1
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
this post was submitted on 14 Jun 2025
1 points (100.0% liked)
Cybersecurity
2 readers
48 users here now
An umbrella community for all things cybersecurity / infosec. News, research, questions, are all welcome!
Rules
Community Rules
- Be kind
- Limit promotional activities
- Non-cybersecurity posts should be redirected to other communities within infosec.pub.
founded 2 years ago
MODERATORS
"Design Patterns for Securing LLM Agents against Prompt Injections (2025) by Luca Beurer-Kellner, Beat Buesser, Ana-Maria Creţu, Edoardo Debenedetti, Daniel Dobos, Daniel Fabian, Marc Fischer, David Froelicher, Kathrin Grosse, Daniel Naeff, Ezinwanne Ozoani, Andrew Paverd, Florian Tramèr, and Václav Volhejn.
I’m so excited to see papers like this starting to appear. I wrote about Google DeepMind’s Defeating Prompt Injections by Design paper (aka the CaMeL paper) back in April, which was the first paper I’d seen that proposed a credible solution to some of the challenges posed by prompt injection against tool-using LLM systems (often referred to as “agents”).
This new paper provides a robust explanation of prompt injection, then proposes six design patterns to help protect against it, including the pattern proposed by the CaMeL paper."
https://simonwillison.net/2025/Jun/13/prompt-injection-design-patterns/