[-] insurgentrat@hexbear.net 17 points 3 days ago

I know it's trendy to just say "nothing ever happens", but there are some reasons to believe this might be somewhat real.

Data collection is still in its early stages, but some people close to those affected report that they showed no prior signs (psychosis most often first manifests between ages 20 and 30). We know that encouraging delusions is quite harmful, and chatbots are built to do exactly that. We know that people reason poorly about computers and tend to ascribe magical properties and truthfulness to their outputs. We know that spending a lot of time alone usually makes psychosis worse, and chatbots encourage spending hours alone.

Much is still very unclear, but it's more plausible than "TV gives you square eyes"-type moral panics.

[-] CarbonScored@hexbear.net 1 points 2 days ago* (last edited 2 days ago)

Unless the psychosis has a very acute onset, I wouldn't say modern AI has been widely available for long enough for 'prior signs' to be a particularly determinable factor.

I'm not saying it's impossible, just that we currently have no actual data to draw conclusions from (we basically can't on these tiny 2-3 year timescales), and I've read no convincing anecdotes that AI was causative rather than incidental.

I daresay "TV gives you square eyes" moral panics had similarly plausible mechanisms at the time, too - including encouraging isolation and people ascribing magical properties to their outputs. There are plenty of good, concrete reasons to criticise AI and its use in the modern world, but this does scream baseless moral panic to me.
