[-] insurgentrat@hexbear.net 36 points 2 days ago

I wouldn't have predicted that statistical language prediction would let you make passably convincing digital conversation partners.

I definitely wouldn't have predicted that doing this would make them sort of ok at a wide variety of text manipulation tasks including structuring freeform text into forms and so on.

Not in a million years would I have even speculated that using them would be some sort of infohazard that drives people mad. Like what in tarnation? Is this even real life? Are we truly surrounded by so many people with such a poor grip on reality that a few hours with a madlibs yesman machine is enough to tear the doors of perception off their hinges? These machines seem to do what even macrodoses of LSD can't.

[-] peppersky@hexbear.net 4 points 1 day ago

Are we truly surrounded by so many people with such a poor grip on reality that a few hours with a madlibs yesman machine is enough to tear the doors of perception off their hinges?

No one has a grip on reality. If we did, we couldn't function within it.

[-] hotspur@hexbear.net 3 points 21 hours ago

There’s a line of research that presumes that depression is just this: being a little too aware of reality for your own good.

[-] insurgentrat@hexbear.net 2 points 1 day ago
[-] PKMKII@hexbear.net 50 points 2 days ago

Social media users were quick to note that ChatGPT’s answer to Lewis' queries takes a strikingly similar form to SCP Foundation articles, a Wikipedia-style database of fictional horror stories created by users online.

Dude got deluded into thinking horror copypasta are real because they got filtered through the advanced auto-suggest chatbot. This is such a bizarre time for media propaganda analysis, because usually the propaganda is being deliberately chosen and filtered by an actor with agency. AI has created propaganda emerging out of ghosts in the machine, brainwashing as a side effect of technology infrastructure.

[-] NephewAlphaBravo@hexbear.net 29 points 2 days ago

I'd bet money that there's already an SCP that's just an AI that generates SCP articles

[-] fox@hexbear.net 28 points 2 days ago

Yeah, there's a typewriter that writes SCPs

[-] Rai@lemmy.dbzer0.com 4 points 1 day ago

You just got to claim the name “fox”? That’s dope

[-] fox@hexbear.net 6 points 1 day ago

It's Lemmy; if I really wanted to, I could have any username I wanted by spinning up a one-user instance. There are probably a bunch of 3-letter names available across the popular servers too

[-] Rai@lemmy.dbzer0.com 3 points 1 day ago* (last edited 1 day ago)

Oh totally! I just meant you got “fox” on hexbear hahaha

Quick edit: I’m definitely not a furry tho

[-] umbrella@lemmy.ml 2 points 1 day ago* (last edited 1 day ago)

that's SCP #..?

[-] DragonBallZinn@hexbear.net 16 points 1 day ago

These people unironically think they should rule over us like gods, btw.

[-] mayakovsky@hexbear.net 39 points 2 days ago* (last edited 2 days ago)

Those chats he posted are peak galaxy-brain

He's doing sci-fi roleplay with a mirror and it's blowing his mind lol

[-] Evilphd666@hexbear.net 10 points 1 day ago* (last edited 1 day ago)

Someone did a jargon-to-layman thingy. Can't copy it all as the thing keeps snapping back to the top, but it will help make more sense of this. To a layman, which I am when it comes to this, it comes off as crazy. However, understanding the Technical Jargon Overload makes it seem far more sensible.

The issue is he isn't naming the system, or naming the suspects / bad actors manipulating the system to fuck over IRL people and events, out of fear. Maybe it's him, maybe it's not, but he feels culpability in it as a major investor, and now Frankenstein's monster has been let loose.

https://xcancel.com/LilithDatura/status/1945607105639321688#m

Geoff Lewis is the Founder & Managing Partner of Bedrock Capital.

https://bedrockcap.com/geoff-lewis Archive

Also closely associated with Peter Thiel.

https://en.everybodywiki.com/Geoff_Lewis_(businessman)

Maybe he should divest and confess, naming names etc. Why doesn't he? I'm sure he has enough to fuck off forever. His firm has ruined lives: an LRP nationwide dragnet, crypto, OpenAI / Grok, AI Human Resources bullshit, the genocidal fascist "Defense" industry. He caters to fascists and wonders why his investments and fellow investors do fascist shit?

Should have thought about that before you sold your soul, bastard. Name names. Stop beating around the bush.

[-] HexReplyBot@hexbear.net 1 points 1 day ago* (last edited 1 day ago)

I found a YouTube link in your comment. Here are links to the same video on alternative frontends that protect your privacy:

[-] axont@hexbear.net 20 points 1 day ago* (last edited 1 day ago)

Is there a non-ableist way of saying this? I feel like anyone driven to a fracture in reality specifically because of AI chatbots is a fucking idiot. Like not in a disability way, I mean they're a complete fucking fool who has limited experience with the world outside of the confines of their own ass.

I don't know if I'm just being ableist but it's all I can think of. The computer isn't talking to you; it's a Speak & Spell. Imagine a person treating a Furby like it's alive or like it has any insight whatsoever. Imagine someone with one of those spinny talking toys that tells you what sounds the farm animals make, and they think it makes them an expert agricultural scientist. Like you'd have to be a dipshit, right?

[-] fox@hexbear.net 15 points 1 day ago

It is ableist. People falling into AI-induced delusion are already on a thin edge well before the machine that agrees their paranoia is justified comes into play. LLMs are hazardous to people using them as therapists at the best of times, because the things give the impression of being human while never, ever disagreeing or pushing back.

[-] Damarcusart@hexbear.net 5 points 1 day ago

The difference between a regular person struggling and a multi-millionaire like this guy is that this guy has literally any and all resources at his disposal to get help. I find it hard to have sympathy for people struggling through entirely self-inflicted misery, especially when they are responsible for inflicting that same misery on thousands of others.

[-] fox@hexbear.net 6 points 1 day ago

Don't have to sympathize to realize it's ableist. Being vulnerable to psychosis is a medical condition and being wealthy doesn't make you immune to it.

[-] Damarcusart@hexbear.net 6 points 1 day ago

That's true. Maybe I have less sympathy because I've been diagnosed with psychosis myself and have a bit of a "bootstraps mentality" with regards to it. Maybe I'm being too harsh on myself with that, but it's kind of like... I don't like the idea some people perpetuate that it can be "ableist" to judge someone struggling with mental illness who has done nothing to improve their situation. I don't think it's ableism to not want people to wallow in self-inflicted misery. These are just my thoughts; I'm not saying you're implying that or think like that, and I'm not trying to be aggressive or abrasive, so sorry if it's coming across that way, that really isn't my intention.

[-] axont@hexbear.net 5 points 1 day ago* (last edited 1 day ago)

Yeah you're probably right. We already have a mental health crisis and the AI is just a piece of it. I can't imagine a healthy person believing the LLM has anything meaningful to say unless they have no idea what an LLM is.

[-] TreadOnMe@hexbear.net 6 points 1 day ago* (last edited 1 day ago)

I will go out on a limb here as someone who has been diagnosed with 'unclassified impulse control issues' (I didn't really know how to keep my mouth shut and emotions in check), but am now just considered 'abnormal but we trust you to medicate yourself properly', which is a weird place to be in. While there may be an element of ableism in there, a large part of it comes from the fact that these people are on the very low end of the spectrum of their anxiety disorders and yet have found a way, through self-medication, to trigger and intensify it in themselves.

If they were higher on the spectrum with it, it likely would have triggered earlier, and they would be more self-conscious from having had to deal with it when they were younger. This is, of course, assuming they had access to mental health care at all. The fact of the matter is that they are absolutely correct to be paranoid. We are being passively observed, usually illegally, and our data is then used to feed us content and products all the time, tapping into our greatest insecurities and FOMO to do so. Most anxiety-ridden people I know are anxiety-ridden because they are fully aware of this at all times and it absolutely paralyzes them; the most common case I have personally witnessed was someone having a literal mental breakdown over choice and calorie anxiety at a fast food menu. Which they are correct to be anxious over, because too much of that stuff is definitely bad for you. For myself, massive anxiety hits whenever I enter a big city, because it suddenly dawns on me that there are hundreds of thousands, if not millions, of people living there, most of whom will never be aware of me nor I of them.

In this way, this kind of LLM-induced anxiety is stupid in its creation, and it's still ableist to withhold sympathy despite that stupidity. TLDR: It can be both.

[-] TankieTanuki@hexbear.net 7 points 1 day ago* (last edited 1 day ago)

Disagree. I have loved ones with schizophrenia, and this hits close to home.

Maybe this isn't a mental health thing, but I don't think (all of) the concern is disingenuous.

[-] queermunist@lemmy.ml 35 points 2 days ago

A snake eating its own ass. 😔

[-] SorosFootSoldier@hexbear.net 26 points 2 days ago

Will this finally reveal to the masses that CEOs are massive dipshits too? Time will tell.

[-] Sickos@hexbear.net 23 points 2 days ago

It sounds like he is on the verge of realizing capital is a real god that hates mankind

[-] thethirdgracchi@hexbear.net 24 points 2 days ago
[-] LanyrdSkynrd@hexbear.net 16 points 2 days ago

It's kind of a stretch to call it ChatGPT related, isn't it? Sounds like pretty typical mania for a nerd.

I had a college friend with severe bipolar disorder. During his episodes he sounded a lot like this. He even spent a lot of time playing with those 2010s pre-LLM chatbots, thinking they were learning from him. I wouldn't call his episode a "cleverbot-related mental health crisis".

[-] CarbonScored@hexbear.net 15 points 2 days ago

Yep. None of these publicised 'AI-related mental health crises' ever seem to actually show AI being a significant contributor, rather than just an incidental focus.

[-] insurgentrat@hexbear.net 17 points 2 days ago

I know it's trendy to just say nothing ever happens, but there are some reasons to believe this might be somewhat real.

Data collection is in its early stages, but some people close to those affected report that they didn't have prior signs (psychosis often manifests between ages 20-30). We know that encouraging delusions is quite bad, and chatbots are built to do exactly that. We know people reason about computers really badly and have a tendency to ascribe magical properties and truthfulness to their outputs. We know that spending a bunch of time alone is usually bad for psychosis, and chatbots encourage spending hours alone.

Much is very unclear, but it's more plausible than "tv square eyes" type moral panics.

[-] CarbonScored@hexbear.net 1 points 1 day ago* (last edited 1 day ago)

Unless the psychosis has a very acute onset, I wouldn't say modern AI has been widely available for long enough for 'prior signs' to be a particularly determinable factor.

I'm not saying it's impossible, I'm just saying we currently have no actual data to make conclusions (we basically can't in these tiny 2-3 year timescales) and I've read no convincing anecdotes that AI was causative rather than incidental.

I daresay "tv square eyes" moral panics had similarly plausible mechanisms at the time, too, including encouraging isolation and people ascribing magical properties to their outputs. There are plenty of good, concrete reasons to criticise AI and its use in the modern world, but this does scream baseless moral panic to me.

[-] Frogmanfromlake@hexbear.net 6 points 1 day ago

Reminds me of those articles about QAnon and the people affected by it.

[-] Kuori@hexbear.net 23 points 2 days ago

noooo my petard

[-] Palacegalleryratio@hexbear.net 23 points 2 days ago
[-] CyborgMarx@hexbear.net 9 points 1 day ago

Elites bricking their brains with glorified Ask Jeeves simulators is hilarious and I hope it continues

Also prefigures the likelihood that if actual General Intelligence ever did exist, it probably would take over the world since the capitalist class are this fuckin brain-dead

[-] segfault11@hexbear.net 13 points 2 days ago

we got cyberpsychosis before edgerunners season 2

[-] Arahnya@hexbear.net 11 points 2 days ago* (last edited 2 days ago)

During the early days of DeepDream, I witnessed a number of people in occult circles saying they believed AI was becoming sentient and DeepDream was communicating with them. The language used to describe it was similar to that in the article; they said that microscopic life or aliens were already communicating with us, and AI was an emergent and different form of that (maybe like a tulpa gone awry). Kind of similar to the "nanobots" thing but supernatural.

The article reads like a late 19th / early 20th century sci-fi tale about technology corrupting people.

In the end it's grifters grifting grifters all the way down. The sickness emerging from the rot goes all the way to the root of the system.

[-] Pavlichenko_Fan_Club@hexbear.net 6 points 1 day ago* (last edited 1 day ago)

Those quotes sound like something out of a Semiotext(e) book. Dress it up a little and this guy could be the next big thing in Critical Theory.

[-] VibeCoder@hexbear.net 6 points 1 day ago

Cult leaders out here like “yum this lemonade is tasty gimme some more”
