submitted 1 day ago* (last edited 8 hours ago) by geneva_convenience@lemmy.ml to c/technology@lemmy.ml

Full image and other similar screenshots


Are we pretending that grok is a source for... anything? True or not?

[-] Tenderizer78@lemmy.ml 1 points 2 hours ago

Grok is a chatbot, not a spokesperson.

[-] apftwb@lemmy.world 5 points 7 hours ago

LLM exploits? X manipulating public opinions? X leveraging AI to manipulate public opinion? Israel/Palestine conflict? This post has everything.

[-] NigelFrobisher@aussie.zone 2 points 8 hours ago

Why is he still there then? We’ve known this for years now.

[-] fort_burp@feddit.nl 3 points 9 hours ago

Makes sense, X always boosts racists.

Anti-genocide = anti-racist

Pro-Israel = racist

[-] BigDiction@lemmy.world 2 points 8 hours ago* (last edited 7 hours ago)

If true, I’d expect Furkan to be upset, but I suppose he just respects the technology behind the algo 🤷

[-] herseycokguzelolacak@lemmy.ml 2 points 10 hours ago

Furkan is a legend in the open source community.

[-] ReallyCoolDude@lemmy.ml 8 points 17 hours ago

I mean, we didn't need grok to know this.

[-] breadguy@kbin.earth 19 points 1 day ago

"this is not a hallucination" source: hallucinator

[-] DieserTypMatthias@lemmy.ml 7 points 23 hours ago* (last edited 23 hours ago)

Just switch to Mastodon and start using a local SLM with a search-engine MCP. Grok generates CP by "undressing" underage boys and girls anyway.

[-] herseycokguzelolacak@lemmy.ml -1 points 10 hours ago

I got banned from mastodon.social for being mildly critical of Israel. Are there any instances that are open minded?

[-] bootleg@sh.itjust.works 2 points 7 hours ago

What do you mean? Mastodon.social is extremely critical of Israel. What exactly did you say to get yourself banned?

[-] geneva_convenience@lemmy.ml 3 points 21 hours ago

Any good Mastodon instances which don't severely limit political content (and have actual content)?

[-] DieserTypMatthias@lemmy.ml 2 points 19 hours ago* (last edited 9 hours ago)

raphus.social.

Or host your own via masto.host.

[-] db0@lemmy.dbzer0.com 3 points 20 hours ago
[-] geneva_convenience@lemmy.ml 4 points 18 hours ago

It's heavily censored NeoLefteralism after all. This is why people keep using Twitter.

[-] db0@lemmy.dbzer0.com -1 points 17 hours ago

Ye don't expect an anarchist instance to be tankie-friendly.

[-] geneva_convenience@lemmy.ml 4 points 17 hours ago

Know of any Mastodon instances which don't believe the genocidal hegemony will be beaten by holding hands?

[-] db0@lemmy.dbzer0.com -3 points 17 hours ago
[-] geneva_convenience@lemmy.ml 1 points 16 hours ago

I asked for an instance without censorship not one which fits your ideology.

[-] db0@lemmy.dbzer0.com -1 points 16 hours ago

I asked for an instance without censorship

Ah you're looking for the 🧊🍑?

Enjoy

[-] eugenevdebs@lemmy.dbzer0.com -1 points 4 hours ago

Censorship Geneva is okay with > censoring Geneva. Clearly anything else is counter-revolutionary.

[-] geneva_convenience@lemmy.ml 4 points 16 hours ago

Can I watch cool videos of Palestinian resistance fighters blowing up Israeli tanks with RPGs there?

[-] geneva_convenience@lemmy.ml 1 points 20 hours ago

Nice, I'll check it out. I see Qudsnen is posting on Mastodon now, which is pretty cool. Maybe it's getting better over there.

[-] Robin@lemmy.world 31 points 1 day ago

Likely just hallucinations. For example, there is no way they would store a confidence score as a string.

[-] decrochay@lemmy.ml 2 points 14 hours ago* (last edited 14 hours ago)

It's also possible that it retrieved the data from whatever sources it has access to (i.e. as tool calls) and then constructed the JSON based on its own schema. That is, the string value may not represent how the underlying data is stored, which wouldn't be unusual or unexpected with LLMs.

But it could definitely also just be a hallucination. I'm not certain, but since the schema looks consistent across these screenshots, it does seem like the schema is pre-defined. (Even if that could be verified, though, it wouldn't completely rule out hallucination, since Grok could be hallucinating values into a pre-defined schema.)
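To illustrate the point above with a toy sketch (the field names and values here are invented, not taken from the screenshots): an LLM that verbalizes tool-call results into its own ad-hoc JSON will happily stringify a numeric field, so a string-typed value says nothing about how the backend actually stores it.

```python
import json

# Hypothetical tool-call result: the backend stores a numeric score.
backend_record = {"account": "example_user", "violation_score": 0.87}

# An LLM assembling JSON in its own ad-hoc schema often coerces
# everything to strings, regardless of the underlying storage type.
llm_style_output = json.dumps({
    "account": backend_record["account"],
    "confidence": str(backend_record["violation_score"]),  # "0.87", a string
})

parsed = json.loads(llm_style_output)
print(type(parsed["confidence"]).__name__)  # str, even though the source was a float
```

So even a perfectly consistent string-valued schema across screenshots is compatible with both "real data, reformatted" and "hallucinated data".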

[-] Pika@sh.itjust.works 7 points 1 day ago* (last edited 1 day ago)

Yea, the only way I can see confidence being stored as a string would be if the key was meant for a GUI management interface that didn't hardcode possible values (think for private investors or untrained engineers, for sugar/cosmetic reasons). In an actual system this would almost always be a number or boolean, not a string.

That being said, it's entirely possible that it's also using an LLM to process the result, which would mean they could have something like an "if it's rated X or higher, do Y" deal, where the LLM would process the string and then respond whether it matches or not, but that would be so inefficient. I would hope that they wouldn't layer it like that.
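A minimal sketch of the contrast being described (schema and field names are hypothetical): with a numeric score, a threshold check is a plain comparison, while a string rating forces every consumer to carry a lookup table first, or worse, another model call.

```python
# Hypothetical numeric schema: threshold check is a single comparison.
record_numeric = {"confidence": 0.92}
flagged = record_numeric["confidence"] >= 0.8

# Hypothetical string schema: every consumer needs an ordering table first.
RATING_ORDER = {"low": 0, "medium": 1, "high": 2}
record_string = {"confidence": "high"}
flagged_string = RATING_ORDER[record_string["confidence"]] >= RATING_ORDER["medium"]

print(flagged, flagged_string)  # True True
```

The extra indirection is why string-valued confidence fields look more like display output than like the backing store.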

[-] geneva_convenience@lemmy.ml 0 points 22 hours ago* (last edited 21 hours ago)

If it were hallucinations, which it very well could be, it means the model has learned this bias somewhere, indicating that Grok has either been programmed to derank Palestine content or has learned it by itself (less likely).

It's difficult to conceive of the AI making this up for no reason, and doing it so consistently across multiple accounts when asked the same question.

[-] Schmoo@slrpnk.net 1 points 29 minutes ago

It's difficult to conceive of the AI making this up for no reason, and doing it so consistently across multiple accounts when asked the same question.

If you understand how LLMs work it's not difficult to conceive. These models are probabilistic and context-driven, and they pick up biases in their training data (which is nearly the entire internet). They learn patterns that exist in the training data, identify identical or similar patterns in the context (prompts and previous responses), and generate a likely completion of those patterns. It is conceivable that a pattern exists on the internet of people requesting information and - more often than not - receiving information that confirms whatever biases are evident in their request. Given that LLMs are known to be excessively sycophantic it's not surprising that when prompted for proof of what the user already suspects to be true it generates exactly what they were expecting.
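The pattern-completion behavior described above can be caricatured with a toy model (the "corpus" here is made up purely for illustration): a model whose training data mostly pairs leading questions with confirming answers will complete a new leading question with confirmation, no deliberate programming required.

```python
from collections import Counter, defaultdict

# Toy "training data": leading prompts are mostly followed by confirmation.
corpus = [
    ("is_this_censored?", "yes"),
    ("is_this_censored?", "yes"),
    ("is_this_censored?", "no"),
]

# Count completions per context, then pick the most likely one,
# mimicking greedy decoding in a context-driven probabilistic model.
counts = defaultdict(Counter)
for context, completion in corpus:
    counts[context][completion] += 1

most_likely = counts["is_this_censored?"].most_common(1)[0][0]
print(most_likely)  # yes: the model echoes the bias in its training data
```

Real LLMs are vastly more complex, but the core mechanism is the same: the output reflects which continuations were most common in the training data for a context like the prompt, not any stored ground truth.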

this post was submitted on 22 Jan 2026
81 points (91.8% liked)

Technology

40935 readers
426 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 6 years ago
MODERATORS