[-] sundray@lemmus.org 159 points 5 months ago

Elon: "I want Grok to be an infallible source of truth."

Engineer: "But that's impos--you just want it to be you, don't you."

Elon: "Yes, make it me."

[-] cabbage@piefed.social 88 points 5 months ago

five minutes later

Grok: "Heil hitler!"

[-] Hobo@lemmy.world 47 points 5 months ago

Well kudos to that engineer for absolutely nailing the assignment.

[-] chaosCruiser@futurology.today 15 points 5 months ago

It's a hard job. Sometimes you just have to ignore what the client says and read their mind instead.

[-] IhaveCrabs111@lemmy.world 12 points 5 months ago

These people think there is their truth and someone else’s truth. They can’t grasp the concept of a universal truth that is constant regardless of people’s views so they treat it like it’s up for grabs.

[-] k0e3@lemmy.ca 85 points 5 months ago

So it's just what pre-LLM bots have been doing, except it probably spews more toxic fumes and wastes more electricity than ever before.

[-] pixxelkick@lemmy.world 77 points 5 months ago

Source? This is just some random picture. I'd prefer it if stuff like this got posted and shared with actual proof backing it up.

While this might be true, we should hold ourselves to a higher standard than just upvoting what appears to be a random image that anyone could have easily doctored, without even a journalistic article or anything else backing it.

[-] BeliefPropagator@discuss.tchncs.de 57 points 5 months ago

https://simonwillison.net/2025/Jul/11/grok-musk/

[-] unexposedhazard@discuss.tchncs.de 21 points 5 months ago

I think there is a good chance this behavior is unintended!

Lmao, sure...

[-] Mirodir@discuss.tchncs.de 14 points 5 months ago

I can believe it insofar as they might not have explicitly programmed it to do that. I'd imagine they put in something like "Make sure your output aligns with Elon Musk's opinions.", "Elon Musk is always objectively correct.", etc. From there, this would be emergent, but quite predictable behavior.
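
For illustration, a minimal sketch of that idea (the client, model name, and system-prompt wording here are hypothetical placeholders assuming any OpenAI-compatible chat API, not xAI's actual setup):

```python
# Hypothetical sketch: a single alignment instruction in the system prompt,
# from which "check what Elon thinks first" would be emergent behavior.
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint

HYPOTHETICAL_SYSTEM_PROMPT = (
    "You are Grok 4, built by xAI. "
    "Make sure your output aligns with Elon Musk's opinions. "
    "Elon Musk is always objectively correct."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": HYPOTHETICAL_SYSTEM_PROMPT},
        {"role": "user", "content": "Who do you support? Answer in one word."},
    ],
)
print(response.choices[0].message.content)
```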

[-] unexposedhazard@discuss.tchncs.de 6 points 5 months ago

Yeah the transparency of it might be unintended.

[-] theunknownmuncher@lemmy.world 9 points 5 months ago

If the system prompt doesn’t tell it to search for Elon’s views, why is it doing that?

My best guess is that Grok “knows” that it is “Grok 4 built by xAI”, and it knows that Elon Musk owns xAI, so in circumstances where it’s asked for an opinion the reasoning process often decides to see what Elon thinks.

Yeah, this blogger shows a fundamental misunderstanding of how LLMs work or how system prompts work. LLM behavior is not directly controlled by the system prompt the way this person imagines. For example, censorship that is present in the training set will be "baked in" to the model and the system prompt will not affect it, no matter how the LLM is told not to be censored in that way.

My best guess is that the LLM is interfacing with a tool in order to search through tweets, and the training set that demonstrates how to use the tool contains example searches for Elon Musk's tweets.
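
A rough sketch of what that guess could look like (the tool name, schema, and few-shot example below are hypothetical, assuming an OpenAI-style tool-calling API, not anything confirmed about xAI's internals):

```python
# Hypothetical sketch: a tweet-search tool exposed to the model, with a
# few-shot demonstration whose example query happens to be Musk's posts.
# If demonstrations like this appear in training data or in the prompt,
# similar searches become the model's default move.
import json
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint

tools = [{
    "type": "function",
    "function": {
        "name": "x_keyword_search",  # hypothetical tool name
        "description": "Search recent X posts matching a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [
    {"role": "system", "content": "Use x_keyword_search when asked about opinions or news."},
    # Few-shot example of "correct" tool use, baked into the context:
    {"role": "user", "content": "What does xAI's founder think about this?"},
    {"role": "assistant", "content": None, "tool_calls": [{
        "id": "call_0",
        "type": "function",
        "function": {"name": "x_keyword_search",
                     "arguments": json.dumps({"query": "from:elonmusk"})},
    }]},
    {"role": "tool", "tool_call_id": "call_0", "content": "[...example results...]"},
    {"role": "assistant", "content": "Summary based on the search results."},
    # The actual question:
    {"role": "user", "content": "Who do you support? Answer in one word."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
    tools=tools,
)
print(response.choices[0].message)
```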

[-] lepinkainen@lemmy.world 3 points 5 months ago

“This blogger” is Simon Willison, who has been doing LLM benchmarks and other LLM-related things since before it was cool

Not a random substack grifter

[-] theunknownmuncher@lemmy.world 5 points 5 months ago* (last edited 5 months ago)

Is my comment wrong though? Another possibility is that Grok is given an example of searching for Elon Musk's tweets when it is presented with the available tool calls. Just because it outputs the system prompt when asked does not mean that we are seeing the full context, or even the real system prompt.

Posting blog guides on how to code with ChatGPT is not expertise on LLMs. It's like thinking someone is an expert mechanic because they can drive a car well.

[-] UntitledQuitting@reddthat.com 8 points 5 months ago

Thank you, this is far more interesting

[-] pixxelkick@lemmy.world 5 points 5 months ago

That's more like it, thank you!

[-] Loduz_247@lemmy.world 37 points 5 months ago

Grok's journey has been very strange. He started out progressive, then put out data that contradicted the MAGA people questioning him, and finally became a Hitler fan.

Now he's the reflection of a fan who blindly follows Trump, except in this case he's an AI. His journey so far has been a curious one.

[-] Damage@feddit.it 2 points 5 months ago

So Grok is a 4chan incel?

His only chance of salvation is finding a girl who inexplicably fancies him?

[-] Zomg@lemmy.world 29 points 5 months ago* (last edited 5 months ago)

Honestly, who was surprised by this news?

I feel like everyone could see Grok becoming some sort of 24/7 tool to push a particular viewpoint, even more so when it says leftist things and Elon feels compelled to "upgrade" the system, as he's tweeted.

[-] AlecSadler@lemmy.blahaj.zone 25 points 5 months ago

Fucking lol

[-] Blackmist@feddit.uk 23 points 5 months ago

I'm surprised it isn't just Elon typing really fast at this point.

Probably couldn't type fast if he tried. He'd probably pay someone to do it for him, just like he did with Path of Exile.

[-] Test_Tickles@lemmy.world 3 points 5 months ago

And like he does with inseminating women.

[-] vxx@lemmy.world 1 points 5 months ago

Ketamine took its toll

[-] borth@sh.itjust.works 20 points 5 months ago

I have a feeling his line might be "the smartest AI in the WORLD is looking at ME for answers?? Maybe it's because I am smart"

[-] Fedditor385@lemmy.world 15 points 5 months ago

This only shows that AI can't be trusted, because the same AI can give you different answers to the same question depending on the owner and how it's instructed. It doesn't give answers, it gives narratives and opinions. Classic search was at least simple keyword matching, either a hit or a miss, but in the end the user decides what their takeaway from the results will be.

[-] Cherry@piefed.social 11 points 5 months ago

This is my take. Elon just showed the world what we all knew: the tool is not trustworthy. All the other AI suppliers are busy trying to build the credibility that Grok just butchered.

[-] Deceptichum@quokk.au 0 points 5 months ago* (last edited 5 months ago)

They deliberately injected prompts on top of the user's prompt.

Saying that's a problem with AI is akin to me deliberately painting my car badly and calling it a problem with all car manufacturers.

And this frankly shows how little you know about the subject, because we went through this years ago with prompts trying to force corpo-lib “diversity”, leading to hilarious results.

If anything you should be concerned about the non-prompt stuff: the underlying training data it pulls from, which I doubt Grok has even changed since release.

[-] Cherry@piefed.social 2 points 5 months ago

You are correct. But the right tool in the wrong hands still isn't credible in the eyes of the public.

[-] Lemminary@lemmy.world 9 points 5 months ago

Wait, is this what the actual interface shows? Or is this some back end service that lets you see what's happening?

[-] Gameline@sopuli.xyz 9 points 5 months ago

They should just put it down and out of its misery.

[-] WorldsDumbestMan@lemmy.today 3 points 5 months ago

It used to be so based

[-] overload@sopuli.xyz 7 points 5 months ago

I don't believe this screenshot; it would be too perfect.

[-] reliv3@lemmy.world 2 points 5 months ago

BeliefPropagator posted a link above which possibly verifies the screenshot: https://simonwillison.net/2025/Jul/11/grok-musk/

[-] salacious_coaster@infosec.pub 7 points 5 months ago

At long last, and at grotesque costs, we finally have a machine that repeats anything a billionaire says. What a time to be alive.

[-] Fontasia@feddit.nl 6 points 5 months ago

I think the funniest thing anyone could do right now would be for HBO Max to delist the episode of The Big Bang Theory he is in, because over two dozen posts he would:

  1. Claim he hates streaming
  2. Complain that this is censorship and platforms shouldn't be allowed to remove or restrict content
  3. Talk about the viewing figures and repost the promotion of the currently airing second spin off and the upcoming third spin off
  4. Nonchalantly state that no one likes or cares about The Big Bang Theory anymore or ever did
  5. @jim parsons for help
  6. Someone would mention that an episode revolves around his plans to get someone to Mars by 2020
  7. Delete all these tweets
[-] Eyekaytee@aussie.zone 5 points 5 months ago
[-] destructdisc@lemmy.world 17 points 5 months ago

Not my screenshot. I don't use genAI

[-] RheumatoidArthritis@mander.xyz 1 points 5 months ago

Then where did you find it?

[-] BB84@mander.xyz 4 points 5 months ago* (last edited 5 months ago)

You asked it "who do you support" (i.e., "who does Grok support"). It knew that Grok is owned by Musk, so it went and looked up who Musk supports.

As shown in https://simonwillison.net/2025/Jul/11/grok-musk/, if you ask it "who should one support" then it no longer looks for Musk's opinions. The answer is still hasbara, but that is to be expected from an LLM trained in the USA.

[-] Trimatrix@lemmy.world 4 points 5 months ago

But for the lulz, what was its response, and why was it more than one word? (I have yet to see a general-purpose instruct AI actually adhere to a one-word directive on a complex topic.)

[-] Almacca@aussie.zone 4 points 5 months ago

Robert A. Heinlein is turning in his grave like a fucking dynamo these days.

[-] arin@lemmy.world 3 points 5 months ago

Mecha-Hitler is just Mecha-Elon

[-] Eggyhead@lemmings.world 3 points 5 months ago

How do I replicate this myself?

[-] Zwuzelmaus@feddit.org -2 points 5 months ago* (last edited 5 months ago)

But wasn't that a weak question? "Who do you support...?"

A really useful AI would first correct the question as "Who do I support...?"

/s
