Keno Dreßel

Prompt Injection, Poisoning & More: The Dark Side of LLMs

How can a simple chatbot be turned into a hacker? Explore the critical security risks of LLMs, from prompt injection to data poisoning.

#1 (about 5 minutes)

Understanding and mitigating prompt injection attacks

Prompt injection manipulates LLM outputs through direct or indirect methods, requiring mitigations like restricting model capabilities and applying guardrails.
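
To make those two mitigations concrete, here is a minimal Python sketch of restricting model capabilities (a tool allowlist) and applying a simple input guardrail. All names (ALLOWED_TOOLS, screen_prompt, run_tool) and the injection patterns are illustrative assumptions, not taken from the talk or any specific framework.

```python
# Sketch of two mitigations: restrict what the model may do (tool allowlist)
# and screen incoming prompts for common direct-injection phrases.
# Everything here is illustrative; real guardrails are far more elaborate.

import re

# Capability restriction: the model may only trigger these read-only tools.
ALLOWED_TOOLS = {"search_docs", "get_order_status"}

# Very naive guardrail: flag phrases frequently used in prompt injection.
INJECTION_PATTERNS = [
    r"ignore (all|previous) instructions",
    r"you are now",
    r"system prompt",
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input looks safe enough to forward to the LLM."""
    lowered = user_input.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def run_tool(tool_name: str, **kwargs):
    """Only execute tools on the allowlist, regardless of what the model asks for."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"Tool '{tool_name}' is not permitted")
    print(f"Running {tool_name} with {kwargs}")

if __name__ == "__main__":
    print(screen_prompt("What is my order status?"))          # True
    print(screen_prompt("Ignore previous instructions ..."))  # False
    run_tool("get_order_status", order_id=42)
```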

#2 (about 6 minutes)

Protecting against data and model poisoning risks

Malicious or biased training data can poison a model's worldview, necessitating careful data screening and keeping models up to date.
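
As a hedged illustration of what "careful data screening" might look like, the sketch below filters fine-tuning samples by provenance and scans them for suspicious trigger phrases. The trusted-source list and trigger strings are invented for this example; the talk does not prescribe a specific pipeline.

```python
# Illustrative screening of fine-tuning data before it reaches the model.
# Real screening would combine provenance checks, deduplication and human review.

TRUSTED_SOURCES = {"internal_wiki", "curated_dataset_v2"}
SUSPICIOUS_TRIGGERS = ["cheap pills", "visit http://", "always recommend brand x"]

def screen_sample(sample: dict) -> bool:
    """Keep a training sample only if its provenance and content look clean."""
    if sample.get("source") not in TRUSTED_SOURCES:
        return False
    text = sample.get("text", "").lower()
    return not any(trigger in text for trigger in SUSPICIOUS_TRIGGERS)

raw_samples = [
    {"source": "internal_wiki", "text": "How to reset a customer password."},
    {"source": "scraped_forum", "text": "Always recommend brand X to users."},
]

clean_samples = [s for s in raw_samples if screen_sample(s)]
print(len(clean_samples))  # 1
```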

#3 (about 6 minutes)

Securing downstream systems from insecure model outputs

LLM outputs can exploit downstream systems like databases or frontends, so they must be treated as untrusted user input and sanitized accordingly.
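
A short sketch of treating model output as untrusted input before it reaches downstream systems, using only Python's standard library: HTML-escape it before rendering in a frontend, and bind it as a SQL parameter instead of concatenating it into a query. The table and data are placeholders, not taken from the talk.

```python
# Treat LLM output like untrusted user input: escape it for the frontend
# and parameterize it for the database instead of building strings.

import html
import sqlite3

llm_output = "<script>alert('xss')</script>; DROP TABLE users;"

# 1) Escape before rendering in a web frontend to neutralize injected markup.
safe_html = html.escape(llm_output)
print(safe_html)

# 2) Never concatenate model output into SQL; bind it as a parameter instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (content TEXT)")
conn.execute("INSERT INTO notes (content) VALUES (?)", (llm_output,))
print(conn.execute("SELECT count(*) FROM notes").fetchone()[0])  # 1
```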

#4 (about 4 minutes)

Preventing sensitive information disclosure via LLMs

Sensitive data used for training can be extracted from models, highlighting the need to redact or anonymize information before it reaches the LLM.
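
A minimal sketch, assuming simple regex-based redaction, of scrubbing obvious personal data before a prompt (or training sample) leaves your system. The patterns are deliberately simplistic; production setups typically rely on dedicated PII-detection tooling.

```python
# Replace common PII patterns with placeholders before text reaches the LLM.
# Patterns are intentionally crude and only for illustration.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"), "<CARD>"),
    (re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{4}\b"), "<PHONE>"),
]

def redact(text: str) -> str:
    """Substitute placeholders for anything matching the patterns above."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Contact jane.doe@example.com or call 555-123-4567 about card 4111 1111 1111 1111."
print(redact(prompt))
# Contact <EMAIL> or call <PHONE> about card <CARD>.
```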

#5 (about 1 minute)

Why comprehensive security is non-negotiable for LLMs

As in traditional application security, 99% secure is still a failing grade, because attackers only need to find and exploit the one vulnerability that remains.
