2213
you are viewing a single comment's thread
view the rest of the comments
[-] danielbln@lemmy.world 8 points 10 months ago* (last edited 10 months ago)

Depends on the model/provider. If you're running this in Azure you can use their content filtering which includes jailbreak and prompt exfiltration protection. Otherwise you can strap some heuristics in front or utilize a smaller specialized model that looks at the incoming prompts.

With stronger models like GPT4 that will adhere to every instruction of the system prompt you can harden it pretty well with instructions alone, GPT3.5 not so much.

this post was submitted on 21 Jan 2024
2213 points (99.6% liked)

Programmer Humor

19623 readers
222 users here now

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code theres also Programming Horror.

Rules

founded 1 year ago
MODERATORS