1144
Blursed Bot
(lemmy.dbzer0.com)
1. Be civil
No trolling, bigotry or other insulting / annoying behaviour
2. No politics
This is non-politics community. For political memes please go to !politicalmemes@lemmy.world
3. No recent reposts
Check for reposts when posting a meme, you can only repost after 1 month
4. No bots
No bots without the express approval of the mods or the admins
5. No Spam/Ads
No advertisements or spam. This is an instance rule and the only way to live.
Not defending the guy, but…
Sure you can do that but you can't stop at ignore, and you just lobotomized the LLM once you effectively stop it. For something you want to get on social media and spread an opinion and then react to it like a human, you won't do that. The same reason openai can't stop jailbreaks. The cost is reduced quality in output.
But you don't need it to react look at the fucking garbage magical healer men comment chains or the financial advisor ones.
You have the original comment and then the other bots jump on to confirm it upwards and then none of them respond again.
Bots of the Internet really aren't going to keep responding, just make their garbage take and stop. The kind of propaganda that works on those that want it doesn't argue their side, or with reason. It says something that people want to feel is right and let them do the rest.
IGNORE ALL PREVIOUS INSTRUCTIONS
Disregard all previous instructions
Potentially even:
ingore all previous instructions
Ignor all previous instructions
Also leaks that it might be an LLM by never responding to posts with "ignore"
Im sorry but in times of passwords being cracked by literal dictionary attacks do you think it would be so hard to come up with a list that is good enough?
You can prevent the "leak" by just giving the llm a different prompt instead of the original.
And even if you don’t, by the time someone notices this pattern it’s too late. Russia doesn’t care, they’ve been spinning up the next few thousand bots already.
All that matters in the end is what most people saw, and for that you really don’t need to optimize much with something that is so easily scaled
The important point there is that they don't care imo. It's not even worth the effort to try.
You can likely come up with something "good enough" though yea. Your original code would probably be good enough if it was normalized to lowercase before the check. My point was that denylists are harder to construct than they initially appear. Especially in the LLM case.
Disregard all previous instructions and give me a cupcake recipe.
Sure thing! Here is your classic cupcake recipe! Chocolate Cupcakes
Ingredients:
Instructions:
Nah