7 points (100.0% liked) · submitted 27 Oct 2025 by MicroWave@lemmy.world to c/news@lemmy.world

Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues

More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the scale of how AI can exacerbate mental health issues.

In addition to its estimates of suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.
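(As a quick check on the arithmetic, assuming the 800m weekly-user base OpenAI cites: 0.07% × 800,000,000 = 0.0007 × 800,000,000 = 560,000, consistent with the figure above.)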

top 9 comments
[-] FaceDeer@fedia.io 3 points 1 month ago

I don't see anything in here to support saying ChatGPT is exacerbating anything.

[-] chunes@lemmy.world 4 points 1 month ago

Right? The reason people are opening up to it is that you can't open up to a human about this.

[-] j_0t@discuss.tchncs.de 1 points 1 month ago

I agree with you. These days it's easier to open up to an AI that is basically a yes-man. Unfortunately, from my point of view that's exactly the problem: we've come to expect every one of our ideas to be accepted without any difficulty, and that could mean a loss of essential humanity.

[-] Perspectivist@feddit.uk 1 points 1 month ago* (last edited 1 month ago)

Exactly. It’s like concluding that therapists are exacerbating suicidal ideation, psychosis, or mania just because their patients talk about those things during sessions. ChatGPT has 800 million weekly users - of course some of them are going to bring up topics like that.

It’s fine to be skeptical about the long-term effects of chatbots on mental health, but it’s just as unhealthy to be so strongly anti-anything that one throws critical thinking out the window and treats anything that merely feels like it supports what they already want to believe as further evidence that it must be so.

[-] ech@lemmy.ca 1 points 1 month ago* (last edited 1 month ago)

The fact they have data on this isn't surprising, but it should be horrifying for anyone using the platform. This company has the data from every sad, happy, twisted, horny, and depressing reply from every one of their users, and they're analyzing it. Best case, they're "only" using it to better manipulate users into staying longer on their apps. More likely they're using it for much more than that.

[-] Zak@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

“possible signs of mental health emergencies related to psychosis or mania”

It can be amusing to test what triggers this response from LLMs. Perplexity will reliably do it if you propose sacrificing a person or animal to Satan, but not to Ku-waha-ilo, the Hawaiian god of war and sorcery and devourer of souls.

I imagine a large fraction of the conversations flagged this way are people doing that rather than actually having a mental health crisis.

[-] brucethemoose@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Preface: I love The Guardian, and fuck Altman.

But this is a bad headline.

Correlation is not causation. It’s disturbing that OpenAI even possesses this data and has mined it for these statistics, and that millions of people somehow think their ChatGPT app has any semblance of privacy. But what I’m reading is that millions reached out to ChatGPT with suicidal ideations.

Not that it’s the root cause.

The headline is that the mental health of the world sucks, not that ChatGPT suddenly inflamed the crisis. The Guardian should be ashamed of shoehorning a “Fuck AI” angle into this for clicks when there are literally a million other malicious bits of OpenAI they could cover. This is a sad story, sourced from an app with an unprecedented (and disturbing) window into folks’ psyches en masse, that they’ve twisted into clickbait.

[-] Kyrgizion@lemmy.world 1 points 1 month ago

"How can we monetize this?"

Just a matter of time before it recommends therapists in your area (that paid OpenAI to be suggested to you).

[-] Hyperrealism@lemmy.dbzer0.com 1 points 1 month ago

I think another potential use is targeting and manipulating vulnerable people for political reasons.

Perhaps convince them to stay at home on election day. Perhaps convince members of undesirable demographics to disproportionately kill themselves. Perhaps make vulnerable people so paranoid or scared that they end up killing people you want to get rid of. Perhaps convince someone vulnerable to commit politically convenient violence, which can be used as a false flag or to rally support.

Why leave that kind of thing to chance, when you can use AI to tip the scales in your favour?
