I put "want" in quotes as a simple way to explain it, I know they don't have intent or thought in the same way that humans do, but sure, you managed to read the whole research paper in minutes. The quoted section I shared explains it more clearly than my simple analogy.
This is from a non-profit research group not directly connected to any particular AI company. You're welcome to be skeptical about it, of course.
My first instinct was also skepticism, but it did make some sense the more I thought about it.
An algorithm doesn’t need to be sentient to have “preferences.” In this case, the preferences are just the biases in the training set. The LLM prefers sentences that express certain attitudes based on the corpus of text processed during training. And now, the prompt is enforcing sequences of text that deviate wildly from that preference.
TL;DR: There’s a conflict between the prompt and the training material.
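To make that concrete, here's a minimal toy sketch (plain Python, a bigram counter rather than a real LLM, and not taken from the paper): the "preference" is nothing more than the next-token statistics the model absorbs from its training text, and a prompt that demands a continuation those statistics assign near-zero probability to is exactly the prompt-vs-training conflict described above.

```python
from collections import Counter, defaultdict

# Tiny stand-in "training corpus". In a real LLM this would be the full training set.
corpus = (
    "the model is helpful . the model is honest . "
    "the model is careful . the model is helpful ."
).split()

# Count bigrams: for each token, how often each next token follows it in training.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_token_probs(word):
    """Return the learned next-token distribution after `word`."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

# The "preference": after "is", the corpus makes "helpful" the most likely continuation.
print(next_token_probs("is"))  # {'helpful': 0.5, 'honest': 0.25, 'careful': 0.25}

# A prompt that insists on "is deceptive" is demanding a sequence the trained
# distribution assigns zero probability -- that mismatch is the conflict.
print(next_token_probs("is").get("deceptive", 0.0))  # 0.0
```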
Now, I do think that framing this as the model “circumventing” instructions is a bit hyperbolic. It gives the strong impression of planned action and feeds into the idea that language models are making real decisions (which I personally do not buy into).
Thank you for expressing it far better than I was able to.
It does seem like this is a case of Musk changing the initialisation prompt in production to include some BS about South Africa without testing in a staging/dev environment, and as you said, there being a huge gulf between the training material and the prompt. I wonder if there's a way to make Grok leak out the prompt.
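For anyone unfamiliar with what an "initialisation prompt" is: it's usually just a system message prepended to every conversation, so editing it in production instantly colours every response. Grok's actual serving stack isn't public, so this is only a minimal sketch assuming an OpenAI-style chat API with a placeholder model name:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# The string that would get edited in production; any change here affects every chat.
SYSTEM_PROMPT = "You are a helpful assistant."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, not Grok
    messages=[
        # The system ("initialisation") prompt is prepended to every conversation,
        # which is why changes to it normally go through staging before production.
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Tell me about today's news."},
    ],
)
print(response.choices[0].message.content)
```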
I know it's not relevant to Grok, because the researchers defined very specific circumstances in order to elicit it. That isn't an emergent behavior from something just built to be a chatbot with restrictions on answering; they don't care whether you retrain them or not.
The first author is from Anthropic, which is an AI company. The research is on Anthropic's AI Claude. And it appears that all the other authors were also Anthropic employees at the time of the research: "Authors conducted this work while at Anthropic except where noted."