Sebastian Schrittwieser

ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.

Prompt injection is the new SQL injection for AI. Learn how to secure your LLM applications before a malicious prompt takes over your system.

ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
#1about 2 minutes

The rapid adoption of LLMs outpaces security practices

New technologies like large language models are often adopted quickly without established security best practices, creating new vulnerabilities.

#2about 4 minutes

How user input can override developer instructions

A prompt injection occurs when untrusted user input contains instructions that hijack the LLM's behavior, overriding the developer's original intent defined in the context.

#3about 4 minutes

Using prompt injection to steal confidential context data

Attackers can use prompt injection to trick an LLM into revealing its confidential context or system prompt, exposing proprietary logic or sensitive information.

#4about 4 minutes

Expanding the attack surface with plugins and web data

LLM plugins that access external data like emails or websites create an indirect attack vector where malicious prompts can be hidden in that external content.

#5about 2 minutes

Prompt injection as the new SQL injection for LLMs

Prompt injection mirrors traditional SQL injection by mixing untrusted data with developer instructions, but lacks a clear mitigation like prepared statements.

#6about 3 minutes

Why simple filtering and encoding fail to stop attacks

Common security tactics like input filtering and blacklisting are ineffective against prompt injections due to the flexibility of natural language and encoding bypass techniques.

#7about 4 minutes

Using user confirmation and dual LLM models for defense

Advanced strategies include requiring user confirmation for sensitive actions or using a dual LLM architecture to isolate privileged operations from untrusted data processing.

#8about 5 minutes

The current state of LLM security and the need for awareness

There is currently no perfect solution for prompt injection, making developer awareness and careful design of LLM interactions the most critical defense.

Related jobs
Jobs that call for the skills explored in this talk.

Featured Partners

Related Articles

View all articles
CH
Chris Heilmann
Dev Digest 138 - Are you secure about this?
Hello there! This is the 2nd "out of the can" edition of 3 as I am on vacation in Greece eating lovely things on the beach. So, fewer news, but lots of great resources. Many around the topic of security. Enjoy! News and ArticlesGoogle Pixel phones t...
Dev Digest 138 - Are you secure about this?
LM
Luis Minvielle
How to Bypass ChatGPT’s Filter With Examples
Since dropping in November 2022, ChatGPT has helped plenty of professionals satisfy an unpredictable assortment of tasks. Whether for finding an elusive bug, writing code, giving resumes a glow-up, or even starting a business, the not-infallible but ...
How to Bypass ChatGPT’s Filter With Examples
DC
Daniel Cranney
Dev Digest 182: GPT5 Prompts, MCP Vulnerabilities, Code Traps
Inside last week’s Dev Digest 182 . 📝 A guide to prompting GPT-5 ⏰ Extreme hours at AI startups 💻 AI is a Junior Dev, and it needs a lead 🐴 Trojans embedded in SVG’s ⚠️ The State of MCP Security ⚒️ A reference manual for people who design and build ...
Dev Digest 182: GPT5 Prompts, MCP Vulnerabilities, Code Traps
AB
Adrien Book
Top 5 ChatGPT Plugins for Developers
The last few weeks have been very interesting in the AI space. We saw the release of a new updated version of ChatGPT from GPT-3.5 to GPT-4. Within a couple of days, Google soft-launched their competitor AI chatbot, Bard (available in the US and UK)....
Top 5 ChatGPT Plugins for Developers

From learning to earning

Jobs that call for the skills explored in this talk.

AI Prompt Engineer

AI Prompt Engineer

SonarSource
Bochum, Germany

Remote
API
Python
Data analysis
Machine Learning
+2