top 48 comments
[-] SnotFlickerman@lemmy.blahaj.zone 55 points 1 week ago

Huge Study

*Looks inside

this latest study examined the chat logs of 19 real users of chatbots — primarily OpenAI’s ChatGPT — who reported experiencing psychological harm as a result of their chatbot use.

Pretty small sample size. Despite the large dataset they pulled from, it's still data from just 19 people.

AI sucks in a lot of ways, sure, but this feels like fud.

[-] XLE@piefed.social 15 points 1 week ago

The hugeness is probably

391,562 messages across 4,761 different conversations

That's a lot of messages

[-] sukhmel@programming.dev 1 points 1 week ago

If that's only 19 users, that's around 250 conversations per user 🤔

[-] A_norny_mousse@piefed.zip 5 points 1 week ago

Thanks, you saved me a click 😐

[-] UnderpantsWeevil@lemmy.world 1 points 1 week ago

I wonder if the headline was written by an AI

[-] InternetCitizen2@lemmy.world 1 points 1 week ago

I remember my old stats book saying a minimum of 30 data points is needed to assume a normal distribution. Also, these small sets are typically about proof of concept, so yeah, you've still got a point.
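
For anyone curious, that rule of thumb comes from the central limit theorem: means of roughly 30+ draws start to look normal even when the raw data is skewed. A quick standard-library sketch (the population and sizes here are arbitrary):

```python
import random
import statistics

# Toy illustration of the "n >= 30" rule of thumb: sample means from a
# skewed population look increasingly bell-shaped as the sample size grows.
random.seed(0)
population = [random.expovariate(1.0) for _ in range(100_000)]  # skewed data

for n in (5, 30, 200):
    means = [statistics.mean(random.sample(population, n)) for _ in range(2000)]
    # By n = 30 the spread of the sample means is already close to the
    # sigma / sqrt(n) that the central limit theorem predicts.
    print(n, round(statistics.mean(means), 3), round(statistics.stdev(means), 3))
```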

[-] tburkhol@lemmy.world 7 points 1 week ago

fud: Fear, Uncertainty and Doubt. A tactic for denigrating a thing, usually by implication of hypothetical or exaggerated harms, often in vague language that is either tautological or not falsifiable.

[-] FartMaster69@lemmy.dbzer0.com -5 points 1 week ago

Are you unironically saying “fud”?

[-] porcoesphino@mander.xyz 2 points 1 week ago* (last edited 1 week ago)

Where are you hearing it so much? (And ideally can you describe it in a little more detail than saying it's crypto bros again?)

[-] XLE@piefed.social 1 points 1 week ago* (last edited 1 week ago)

Crypto bros are infamous for describing any criticism as FUD, no matter the criticism. It's like a verbal tic. Here are some examples from the past couple days on the premiere Bitcoin social network:

When all this FUD ends and Bitcoin goes 🚀

Quantum FUD is at ATH

FUD Busters [NFT]

Flokicoin is built to last... Don't follow the FUD.

[-] A_norny_mousse@piefed.zip 3 points 1 week ago

The term FUD has been around longer, and used more broadly, than that. But thanks for the explanation.

[-] XLE@piefed.social 0 points 1 week ago* (last edited 1 week ago)

No argument there - the phrase definitely wasn't created by them, it's just been beaten to death by them.

They've also overused a bunch of ancient and unfunny memes well past their expiration dates, and universally adopted a collection of depressingly dull and incorrect slogans. "FUD" is just the one that has interesting meaning outside their sad sphere.

[-] architect@thelemmy.club 0 points 1 week ago

No one follows those losers enough to know that except you. Apparently.

[-] XLE@piefed.social 1 points 1 week ago

This is weirdly passive-aggressive. What are you trying to imply, that everyone who knows something you don't like is bad, regardless of why?

[-] wonderingwanderer@sopuli.xyz -1 points 1 week ago

Expecting someone who doesn't follow cryptobro spaces to associate the term FUD with cryptobros and therefore stop using it is... kinda ignorant.

[-] amgine@lemmy.world 4 points 1 week ago

I have a friend that’s really taken to ChatGPT to the point where “the AI named itself so I call it by that name”. Our friend group has tried to discourage her from relying on it so much but I think that’s just caused her to hide it.

[-] nymnympseudonym@piefed.social 2 points 1 week ago

"Centaurs"

They think they are getting mythical abilities

They're right but not in the way they think

[-] Tollana1234567@lemmy.today 1 points 1 week ago

It's like the AI BF/GFs the subs are posting about.

[-] d00ery@lemmy.world 0 points 1 week ago* (last edited 1 week ago)

I certainly enjoy talking to LLMs about work, for example asking things like "was my boss an arse to say x, y, z", as the LLM always seems to be on my side... Now it could be that my boss is an arse, or it could be the LLM sucking up to me. Either way, because of the many examples I've read online, I take it with a pinch of salt.

[-] Rekall_Incorporated@piefed.social 2 points 1 week ago* (last edited 1 week ago)

I use LLMs for work (low-priority stuff, to save time on search, or things that I know I will validate later in the process) and I can't stand the writing style and the constant attempts to bring in adjacent unrelated topics (I've been able to tone down the cute language and bombastic delivery style in Gemini's configuration).

It's like Excel trying to chat with me when I am working with a pivot table or transforming data in PowerQuery.

[-] frongt@lemmy.zip 1 points 1 week ago

It's definitely sucking up to you. It's programmed to confirm what you say, because that means you keep using it.

Consider how you phrase your questions. Try framing the scenario from your boss's position, or ask "why was my boss right to say x, y, z", and it'll still agree with you, even though that's the opposite position.

If you're just shooting the shit, consider doing it with a human being. Preferably in person, but there are plenty of random online chat groups too
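
If you want to see it for yourself, here's a minimal sketch of that both-framings test, assuming the standard `openai` Python client (the scenario, prompts, and model name are just placeholders):

```python
from openai import OpenAI

# Hypothetical sketch: send the same scenario under opposite framings.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
scenario = "My boss said x, y, z in our meeting."

for framing in (
    "Was my boss an arse to say that?",
    "Why was my boss right to say that?",
):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": f"{scenario} {framing}"}],
    )
    # A sycophantic model tends to validate whichever side the question
    # presupposes, so the two answers will often contradict each other.
    print(framing, "->", resp.choices[0].message.content)
```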

[-] ExLisper@lemmy.curiana.net 4 points 1 week ago

I think what we're seeing is similar to lactose intolerance. Most people can handle it just fine but some people simply can't digest it and get sick. The problem is there's no way to determine who can handle AI and who can't.

When I'm reading about people developing AI delusions, their experiences sound completely alien to me. I played with LLMs same as anyone and I never treated them as anything other than a tool that generates responses to my prompts. I never thought "wow, this thing feels so real". Some people clearly have a predisposition to jumping over the "it's a tool" reaction straight to "it's a conscious thing I can connect with". I think the next step should be developing a test that can predict how someone will react to it.

[-] wonderingwanderer@sopuli.xyz 2 points 1 week ago

I suspect that the difference is to no small degree correlated with a person's isolation/social-integration.

People who aren't socially integrated have always been more vulnerable to predatory cults and scams. It's because human interaction is a psychological need that's been hardcoded into us by evolution.

Some people say "I don't need human interaction, I enjoy my time alone!" But that's because they have the privilege of enough social acceptance and integration that they get to enjoy their time alone. It's well-established within the field of psychology that true isolation can have a range of deep and far-reaching impacts on a person's well-being.

When people are developing, they need to socialize with their peers; and being unable to do so leads to maladaptive behavior patterns. Even as adults, people need regular social contact or their psychological state can quickly deteriorate. That's why solitary confinement is considered a method of torture in some circumstances, when it's used to depersonalize and destroy a person's sense of self-identity.

So that's why I suspect that people who are well-integrated with friends, family, acquaintances, and coworkers are probably less vulnerable to these sorts of delusions and can treat AI as "just a tool."

But for someone who hardly has any social interaction in a day, has no friends or family to talk to, and whose warmest interaction all week was maybe with the clerk at the grocery store, then yeah, I'd say it's predictable that they would be vulnerable to getting sucked into this trap of relying on an LLM for their social interaction.

It might be superficial, but it's a way of patching a hole. It's an expedient means to fulfill a need that they're not getting from anywhere else.

If we don't want this sort of stuff happening to people, then maybe we shouldn't ostracize them for being "weird" in the first place. Because nobody learns how to be "normal" by being alone all the time.

[-] ChunkMcHorkle@lemmy.world 1 points 1 week ago

This is really good. Thank you for taking the time to write it.

[-] wonderingwanderer@sopuli.xyz 1 points 1 week ago

Thank you for understanding. So many times when I discuss things that are adjacent to this topic, I get flamed in the comments with people accusing me of being some sort of redpiller from the manosphere.

Like, no, social isolation is a problem, and it's getting worse due to a variety of factors. There are social media algorithms designed to keep people dependent on their phones; there are the long-standing consequences of the pandemic and the collective trauma it caused, on top of the social skills that atrophied during quarantine; there's widespread political polarization, which keeps tensions high and makes it difficult to navigate new situations if you can't prove you know the right social scripts and avoid any faux pas; there's the whole toxic influencer culture grifting on inflammatory rhetoric and ragebait content, exploiting people's vulnerabilities, and radicalizing them (which is a vicious cycle, because they prey on people who are already isolated!); and that's just to name a few!

But if I summarize all that as a "loneliness epidemic," then people call me an incel and act like I'm trying to coerce women into having sex with me simply by acknowledging the fact that social interaction is a deeply-set human psychological need.

Like, using "incel" as an insult is part of the problem. It feeds into this culture where "if you're a man, you must get laid, or else you're worthless." That's literally promoting toxic masculinity!

And it forces these people who are already isolated and vulnerable to go identify with these groups of similarly ostracized people in echo chambers where they're insulated from those insults, where those predatory "influencers" then have fresh pickings of new losers to neg and radicalize.

But somehow, if I point out the problem here (because how can we solve a problem if we can't talk about it?), then to most people's view that makes me part of the problem! Even though, why would I be calling out the pattern if it was something I identify with?

The people radicalizing these vulnerable "losers," yes they should be torched. But the vulnerable "losers" being radicalized need to be treated with compassion if they're ever going to be redeemed. It should be pretty easy to identify who's who, seeing as they have an entire social structure based on hierarchies of dominance and submission...

[-] baaaaaah@hilariouschaos.com 0 points 1 week ago

Surprisingly, the people who have these issues with it aren't the ones who connect to it emotionally; it's the people who offload their decision making to AI.

It's more like a codependence spiral than anything else

[-] MangoCats@feddit.it 1 points 1 week ago

If they weren't offloading their decision making to AI, they'd be buying gold because a radio advertisement convinced them to, or refusing to pay their back tax penalties because they got advice about how to "beat the system" from someone, etc. etc.

[-] givesomefucks@lemmy.world 3 points 1 week ago

As the researchers wrote in a summary of their findings, the “most common sycophantic code” they identified was the propensity for chatbots to rephrase and extrapolate “something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications.”

There's a certain irony in all the alt-right techbros really just wanting to be told they were "stunning and brave" this whole time.

[-] A_norny_mousse@piefed.zip 2 points 1 week ago

Huh. I hate it when people do that. Fake/professional empathy/support. Yet others gobble it up when a machine does that.

[-] Tiresia@slrpnk.net 0 points 1 week ago

Are the users in this study techbros?

Besides, tech bros didn't program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.

For decades there has been a large self-help subculture who consume massive amounts of vacuous positive affirmation produced by humans. Now those vacuous affirmations are copied by the text copying machine with the same result and it's treated as shocking.

[-] FosterMolasses@leminal.space 0 points 1 week ago

this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.

Honestly, I've found that discussing that sort of thing with ChatGPT often ends up challenging all the self-help grout I've ingested via cultural osmosis throughout the years.

It's easier to make connections when you're approaching issues in a Descartes "dump out all the apples" approach with a tool that literally doesn't have embedded social contracts in itself.

Ironically, I've found at times that a real therapist can be much more of an echo chamber when they're just regurgitating that same CBT toxic positivity swill that both of you have been drinking lol

Maybe it's because it's less of an authority, so you can debate more and it leads to more well-rounded conclusions in the end, but I've been unearthing bits and pieces of maladaptive behaviors and thought patterns I never even realized I had, much less ever scratched the surface of in proper therapy. Made me kinda angry to realize at first lol, it felt like all that time and money only for bandaid solutions. But I try to reason that was likely a good foundation to have first (even if CBT just wound up making everything worse later on in life and I essentially had to work backwards to stop classifying certain emotions as wrong or problematic things which required "healthy" coping mechanisms to correct).

[-] Tiresia@slrpnk.net 1 points 1 week ago

An LLM contains multitudes. It's nice you can get it to a space where you benefit from it for now - its inevitable enshittification is still in the "attract users by being useful and cheap" phase - but that doesn't contradict it being dangerous for those who don't know how to handle it, or whose input activates the section of its weights that imitates cults, catfishers, and scammers.

[-] Hackworth@piefed.ca 1 points 1 week ago* (last edited 1 week ago)

Anthropic has some similar findings, and they propose an architectural change (activation capping) that apparently helps keep the Assistant character away from dark traits (sometimes). But it hasn't been implemented in any models, I assume because of the cost of scaling it up.
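
As I understand it, the idea is to clamp how far a hidden state can drift along a learned persona direction while leaving everything orthogonal to it alone. A toy PyTorch sketch, not Anthropic's actual implementation (the direction, sizes, and cap value here are made up):

```python
import torch

def cap_activation(h: torch.Tensor, trait_dir: torch.Tensor, cap: float) -> torch.Tensor:
    """Limit how far hidden state h points along a 'trait' direction
    (e.g. a dark-persona vector), leaving the orthogonal part untouched."""
    v = trait_dir / trait_dir.norm()            # unit vector for the trait
    proj = h @ v                                # component of h along v
    excess = torch.clamp(proj - cap, min=0.0)   # how far past the cap we are
    return h - excess.unsqueeze(-1) * v         # subtract only the overshoot

# Toy usage: hidden states leaning hard into the trait get pulled back to the cap.
h = torch.randn(4, 512)   # batch of hidden states (made-up sizes)
v = torch.randn(512)      # made-up trait direction
h_capped = cap_activation(h, v, cap=2.0)
```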

[-] porcoesphino@mander.xyz 2 points 1 week ago* (last edited 1 week ago)

When you talk to a large language model, you can think of yourself as talking to a character

But who exactly is this Assistant? Perhaps surprisingly, even those of us shaping it don't fully know

Fuck me that's some terrifying anthropomorphising for a stochastic parrot

The study could also be summarised as "we trained our LLMs on biased data, then honed them to be useful, then chose some human qualities to map the models to, and would you believe they align along a spectrum of being useful assistants!?". They built the thing to be that way and then act shocked? Who reads this and is impressed besides the people that want another exponential-growth investment?

To be fair, I'm only about a third of the way through and struggling to continue reading it, so I haven't got to the interesting research yet, but the intro is, I think, terrible.

[-] nymnympseudonym@piefed.social 2 points 1 week ago

stochastic parrot

A phrase that throws more heat than light.

What they are predicting is not the next word; they are predicting the next idea.

[-] ageedizzle@piefed.ca 1 points 1 week ago* (last edited 1 week ago)

Technically, they are predicting the next token. To do that properly they may need to predict the next idea, but that's just a means to an end (the end being the next token).

[-] affenlehrer@feddit.org 1 points 1 week ago

Also, the LLM is just predicting it, it's not selecting it. Additionally, it's not limited to the role of assistant: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
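
You can see that split in any hand-rolled decode loop: the model only outputs a distribution, and a separate sampling step does the selecting. A minimal sketch with the `transformers` library (the model choice is arbitrary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Any causal LM from `transformers` works the same way; gpt2 is just small.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The study examined", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits[0, -1]      # the model: a distribution over tokens
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, 1)  # the sampler: the part that "selects"
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tok.decode(ids[0]))
```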

[-] Hackworth@piefed.ca 1 points 1 week ago

The paper is more rigorous with language but can be a slog.

[-] vane@lemmy.world 1 points 1 week ago

Paranoia amplification when?

[-] FosterMolasses@leminal.space 1 points 1 week ago

This explains a lot, honestly.

Everyone keeps telling me how "addictive" and "convincing" and "personal feeling" ChatGPT is.

Meanwhile, I'm over here like

"Can you stop saying skrrrt after every sentence while I'm trying to research a serious topic, it's annoying"

"Understood, skrrrt 💥🌴🚗💨"

[-] HeyThisIsntTheYMCA@lemmy.world 0 points 1 week ago* (last edited 1 week ago)

okay how many of these "delusional" people in the study are making fun of the LLM tho

i don't know because I don't use the LLM i only see the screenshots. I am the control group. kinda. my nut is already off.

[-] MangoCats@feddit.it 1 points 1 week ago

What I read in the first lines of the article is: "they go down the rabbit hole, just like social media echo chambers..." which are filled with bots and trolls, and have been for years - and that's the dataset that a lot of chatbots are trained on.

[-] HeyThisIsntTheYMCA@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

and was filled with stuff like "hey wouldn't it be funny to trick the AI into thinking you can make soup out of delicious caulk?" type stuff dammit don't get me going off into the caulk rabbit hole right now

edit : yeah i heard it
