[-] expatriado@lemmy.world 83 points 5 months ago

there is going to be a lot of ghosting during the ai crash

[-] NotSteve_@piefed.ca 63 points 5 months ago

It already sort of happened with GPT5, people were contemplating suicide over the AI becoming less personable

[-] LifeInMultipleChoice@lemmy.dbzer0.com 21 points 5 months ago* (last edited 5 months ago)

I don't want to give Grok and the like the traffic, but is it really much less personable? Like, did some of these people become friends with an AI, then get forced to move to an AI that treats them worse?

I've never tried to have a conversation with one like that. I did ask one to help me figure out what was wrong with a docker container I was setting up. I think I ended up just tossing it and starting from scratch after I had clearly set something up wrong initially; the AI just went in loops, trying to get me to try the same things over and over. Haven't tried them recently.

[-] ButteryMonkey@piefed.social 21 points 5 months ago

I read an article about that, and apparently the change to the model made the AI stop responding to affectionate messages with the same affectionate tone in reply. It got more businesslike and less intimate.

[-] BeigeAgenda@lemmy.ca 12 points 5 months ago

I definitely got weirded out asking a GPT-3 model about something and having it get clingy.

Now I see it more like a search engine: skim the wall of text to find the useful information. Today I gave it a lot of context, explained what I had done and the error I got, and it more or less told me I did everything correctly, then suggested stuff I'd already tried. That's its way of saying "I don't know".

[-] DrDystopia@lemy.lol 10 points 5 months ago

Like did some of these people become friends with an AI, then get forced to move to an AI that treats them worse?

Basically yes, it's the normie version of testing out different models and finding one they like.

And "friends"? That's a bit of an oversimplification, but when I tested models on my personal AI rig I could run all sorts of models and write my own system prompts. Using a default character sheet as the benchmark, different models gave off vastly different "personalities" in their answers.

Some of them I liked so much that after testing various others I went back to the "nice" models, not too concerned with quantization accuracy or parameter count. I tested for personality, creativity, empathic mimicry and so on, not for factual answers. Not even the giant, up-to-date models are usable for facts.

Marvin the Paranoid Android powered by an uncensored sci-fi horror LLM is delightful fun; finally someone on my level of positive outlook on things!
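
The kind of side-by-side system-prompt testing described above can be sketched against a local OpenAI-compatible chat endpoint (the sort of API llama.cpp or Ollama expose). Everything here is an assumption for illustration: the endpoint URL, the model name, and the personality prompts are all made up, not taken from the thread.

```python
import json

# Hypothetical local endpoint and model name -- adjust for your own rig.
ENDPOINT = "http://localhost:11434/v1/chat/completions"
MODEL = "my-local-model"

# The same "benchmark" user message is sent under different system prompts,
# so any difference in tone comes from the prompt and model, not the question.
PERSONALITY_PROMPTS = {
    "neutral": "You are a helpful assistant.",
    "gloomy": "You are a deeply pessimistic but brilliant robot.",
    "cheerful": "You answer everything with relentless optimism.",
}

def build_request(system_prompt: str, user_message: str) -> dict:
    """Build an OpenAI-style chat payload for one personality test."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        # A higher temperature tends to make tone differences more visible.
        "temperature": 0.8,
    }

if __name__ == "__main__":
    question = "My docker container won't start. Any ideas?"
    for name, prompt in PERSONALITY_PROMPTS.items():
        payload = build_request(prompt, question)
        # To actually run it, POST the JSON payload to ENDPOINT and compare
        # the replies side by side.
        print(name, json.dumps(payload)[:60], "...")
```

The point of keeping the user message fixed is the same as the "default character sheet" benchmark above: it isolates the personality variable so you can judge tone rather than answer quality.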

this post was submitted on 10 Nov 2025
1031 points (98.2% liked)

Programmer Humor
