Consistent Jailbreaks in GPT-4, o1, and o3 - General Analysis (generalanalysis.com)
I think a big part of it is that these companies want control: they want to limit what we can do with their models, especially anything that runs against their interests. That's why they try to block the uses they dislike, like generating porn or discussing violent content.
I noticed that certain prompts people used for AI poisoning are now flagged as against ChatGPT's terms of service, so the whole "control" thing doesn't seem so crazy.