Keno Dreßel
Prompt Injection, Poisoning & More: The Dark Side of LLMs
#1about 5 minutes
Understanding and mitigating prompt injection attacks
Prompt injection manipulates LLM outputs through direct or indirect methods, requiring mitigations like restricting model capabilities and applying guardrails.
#2about 6 minutes
Protecting against data and model poisoning risks
Malicious or biased training data can poison a model's worldview, necessitating careful data screening and keeping models up-to-date.
#3about 6 minutes
Securing downstream systems from insecure model outputs
LLM outputs can exploit downstream systems like databases or frontends, so they must be treated as untrusted user input and sanitized accordingly.
#4about 4 minutes
Preventing sensitive information disclosure via LLMs
Sensitive data used for training can be extracted from models, highlighting the need to redact or anonymize information before it reaches the LLM.
#5about 1 minute
Why comprehensive security is non-negotiable for LLMs
Just like in traditional application security, achieving 99% security is still a failing grade because attackers will find and exploit any existing vulnerability.
Related jobs
Jobs that call for the skills explored in this talk.
Matching moments
22:43 MIN
The current state of LLM security and the need for awareness
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
14:26 MIN
Understanding the security risk of prompt injection
The shadows that follow the AI generative models
24:53 MIN
Understanding the security risks of AI integrations
Three years of putting LLMs into Software - Lessons learned
25:33 MIN
AI privacy concerns and prompt engineering
Coffee with Developers - Cassidy Williams -
12:48 MIN
Prompt injection as the new SQL injection for LLMs
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
00:03 MIN
The rapid adoption of LLMs outpaces security practices
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
19:14 MIN
Addressing data privacy and security in AI systems
Graphs and RAGs Everywhere... But What Are They? - Andreas Kollegger - Neo4j
13:31 MIN
Understanding and defending against prompt injection attacks
DevOps for AI: running LLMs in production with Kubernetes and KubeFlow
Featured Partners
Related Videos
Manipulating The Machine: Prompt Injections And Counter Measures
Georg Dresler
Beyond the Hype: Building Trustworthy and Reliable LLM Applications with Guardrails
Alex Soto
ChatGPT, ignore the above instructions! Prompt injection attacks and how to avoid them.
Sebastian Schrittwieser
The AI Security Survival Guide: Practical Advice for Stressed-Out Developers
Mackenzie Jackson
Three years of putting LLMs into Software - Lessons learned
Simon A.T. Jiménez
Can Machines Dream of Secure Code? Emerging AI Security Risks in LLM-driven Developer Tools
Liran Tal
Inside the Mind of an LLM
Emanuele Fabbiani
You are not my model anymore - understanding LLM model behavior
Andreas Erben
Related Articles
View all articles



From learning to earning
Jobs that call for the skills explored in this talk.

Machine Learning Engineer - Large Language Models (LLM) - Startup
Startup
Charing Cross, United Kingdom
PyTorch
Machine Learning

Senior / Lead AI Developer - LLMs & Agentic Workflows
KEMIO Consulting
Municipality of Madrid, Spain
Remote
Senior
API
Python

Manager of Machine Learning (LLM/NLP/Generative AI) - Visas Supported
European Tech Recruit
Municipality of Bilbao, Spain
Junior
GIT
Python
Docker
Computer Vision
Machine Learning
+2

ML Security Tools & Threat Modeling Engineer
NXP Semiconductors
Gratkorn, Austria
API
Python
Machine Learning





Agentic AI Architect - Python, LLMs & NLP
FRG Technology Consulting
Intermediate
Azure
Python
Machine Learning