1230
submitted 5 months ago by ElCanut@jlai.lu to c/programmerhumor@lemmy.ml
you are viewing a single comment's thread
view the rest of the comments
[-] kromem@lemmy.world 2 points 5 months ago* (last edited 5 months ago)

Kind of. You can't do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.

[-] redcalcium@lemmy.institute 5 points 5 months ago

Won't this cause subtle but serious issue? Kinda like how pomegranate translates to "granada" in Spanish, but when you translate "granada" back to English it translates to grenade?

[-] kromem@lemmy.world 1 points 5 months ago

It will, but it will also cause less subtle issues to fragile prompt injection techniques.

(And one of the advantages of LLM translation is it's more context aware so you aren't necessarily going to end up with an Instacart order for a bunch of bananas and four grenades.)

this post was submitted on 07 Jun 2024
1230 points (92.8% liked)

Programmer Humor

32386 readers
690 users here now

Post funny things about programming here! (Or just rant about your favourite programming language.)

Rules:

founded 5 years ago
MODERATORS