755
show the system exactly the respect it shows you
(slrpnk.net)
We're trying to reduce the numbers of hours a person has to work.
We talk about the end of paid work being mandatory for survival.
Partnerships:
/join #antiwork
)
Telling an LLM to ignore previous commands after it was instructed to ignore all future commands kinda just resets it.
On what models? What temperature settings and top_p values are we talking about?
Because, in all my experience with AI models including all these jailbreaks, that’s just not how it works. I just tested again on the new gpt-4o model and it will not undo it.
If you aren’t aware of any factual evidence backing your claim, please don’t make one.