484
Lavalamp too hot (discuss.tchncs.de)
submitted 1 week ago* (last edited 1 week ago) by swiftywizard@discuss.tchncs.de to c/programmer_humor@programming.dev
[-] kamen@lemmy.world 4 points 6 days ago

If software was your kid.

Credit: Scribbly G

[-] DylanMc6@lemmy.dbzer0.com 2 points 1 week ago

The AI touched that lava lamp

[-] stsquad@lemmy.ml 122 points 1 week ago

If you have ever read the "thought" process of some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I'm not even sure this isn't by design.

[-] swiftywizard@discuss.tchncs.de 79 points 1 week ago

I dunno, let's waste some water

[-] gtr@programming.dev 7 points 1 week ago

They are trying to get rid of us by wasting our resources.

[-] MajorasTerribleFate@lemmy.zip 13 points 1 week ago

So, it's Nestlé behind things again.

[-] SubArcticTundra@lemmy.ml 20 points 1 week ago

I'm pretty sure training is purely result-oriented, so anything that works goes.

[-] Feathercrown@lemmy.world 9 points 1 week ago

Why would it be by design? What does that even mean in this context?

[-] MotoAsh@piefed.social 6 points 1 week ago

You have to pay for tokens on many of the "AI" tools that you do not run on your own computer.

[-] Feathercrown@lemmy.world 8 points 1 week ago* (last edited 1 week ago)

Hmm, interesting theory. However:

  1. We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.

  2. LLMs are running at a loss right now; the company would lose more money than it gains from you, so there is no motive.

[-] Darohan@lemmy.zip 79 points 1 week ago
[-] Kyrgizion@lemmy.world 54 points 1 week ago

Attack of the logic gates.

[-] ideonek@piefed.social 35 points 1 week ago
[-] FishFace@piefed.social 109 points 1 week ago

LLMs work by picking the next word* that is the most likely candidate given the training data and the context so far. Sometimes the model gets into a state where appending the chosen word barely changes its view of the context, so the most likely next word comes out the same again. Then the same thing happens again and around we go. There are fail-safe mechanisms that try to prevent it, but they don't work perfectly.

*Token
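
To make that concrete, here's a minimal toy sketch (made-up probabilities and a greedy picker, not any real model or API): once "or" becomes the most likely successor of "or", greedy decoding locks in, and a crude repetition penalty (one real mitigation, heavily simplified here) only partly helps.

```python
# Toy illustration of next-token looping. The "model" is just a table of
# hypothetical probabilities keyed by the previous token.
TOY_MODEL = {
    "lamp": {"is": 0.6, "or": 0.4},
    "is":   {"hot": 0.7, "or": 0.3},
    "hot":  {"or": 0.8, ".": 0.2},
    "or":   {"or": 0.9, ".": 0.1},   # "or" overwhelmingly predicts "or" again
    ".":    {".": 1.0},
}

def generate(prompt, steps=12, repetition_penalty=1.0):
    tokens = list(prompt)
    for _ in range(steps):
        probs = dict(TOY_MODEL[tokens[-1]])
        # Fail-safe: down-weight tokens in proportion to how often
        # they've already been emitted (no-op when the penalty is 1.0).
        for tok in set(tokens):
            if tok in probs:
                probs[tok] /= repetition_penalty ** tokens.count(tok)
        tokens.append(max(probs, key=probs.get))   # greedy pick
        if tokens[-1] == ".":
            break
    return " ".join(tokens)

print(generate(["lamp", "is"]))                          # lamp is hot or or or or or ... (stuck)
print(generate(["lamp", "is"], repetition_penalty=2.0))  # lamp is hot or or or or .  (escapes, eventually)
```

Note how even with the penalty it still stutters a few "or"s before escaping, which is the "they don't work perfectly" part.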

[-] ideonek@piefed.social 20 points 1 week ago

That was the answer I was looking for. So it's similar to the "seahorse" emoji case, but this time, at some point it just glitched so that the most likely next word for the sentence is "or", and after adding that "or" the next one is also "or", and after adding that one it's also "or", and after the 11th one... you might as well commit, since that's the same context as with 10.

Thanks!

[-] MonkderVierte@lemmy.zip 5 points 1 week ago

I once got it into a "while it is not" / "while it is" loop.

[-] ch00f@lemmy.world 56 points 1 week ago

Gemini evolved into a seal.

[-] kamenlady@lemmy.world 12 points 1 week ago

or simply, or

[-] Arghblarg@lemmy.ca 31 points 1 week ago

The LLM showed its true nature: a probabilistic bullshit generator that got caught in a strange attractor of some sort within its own matrix of lies.

[-] ech@lemmy.ca 28 points 1 week ago

It's like the text predictor on your phone. If you just keep hitting the next suggested word, you'll usually end up in a loop at some point. Same thing here, though admittedly much more advanced.

[-] palordrolap@fedia.io 18 points 1 week ago

Unmentioned by other comments: The LLM is trying to follow the rule of three because sentences with an "A, B and/or C" structure tend to sound more punchy, knowledgeable and authoritative.

Yes, I did do that on purpose.

[-] Cevilia@lemmy.blahaj.zone 12 points 1 week ago

Not only that, but also "not only, but also" constructions, which sound more emphatic, conclusive, and relatable.

[-] kogasa@programming.dev 17 points 1 week ago

Turned into a sea lion

[-] ChaoticNeutralCzech@feddit.org 20 points 1 week ago* (last edited 1 week ago)

Nah, too cold. It stopped moving, so the computer can't generate any more random numbers to pick from the LLM's weighted suggestions. Fittingly, LLMs have a sampling setting called "temperature": too cold and the output is repetitive, unimaginative and copies the input too closely (like sentences built by taking the first autocomplete suggestion every time); too hot and it's chaos: 98% nonsense, 1% repeat of the input, 1% something useful.
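
For the curious, a minimal sketch of that knob (the candidate words and scores below are made up, not any real model's output): the raw scores are divided by the temperature before the softmax, so a low temperature sharpens the distribution (near-greedy, repetitive) and a high one flattens it (mostly noise).

```python
import math
import random

def temperature_probs(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["or", "lamp", "hot", "banana"]         # hypothetical candidate words
logits = [2.0, 1.0, 0.2, -1.0]                   # hypothetical raw scores

for t in (0.1, 1.0, 10.0):
    probs = temperature_probs(logits, t)
    pick = random.choices(tokens, weights=probs)[0]
    print(f"T={t:>4}: "
          + " ".join(f"{tok}={p:.2f}" for tok, p in zip(tokens, probs))
          + f"  -> sampled: {pick}")

# T= 0.1: "or" gets ~1.00, everything else ~0.00 -- always the same word (too cold)
# T= 1.0: roughly 0.63 / 0.23 / 0.10 / 0.03     -- mostly sensible, some variety
# T=10.0: roughly 0.29 / 0.26 / 0.24 / 0.21     -- almost uniform, mostly noise (too hot)
```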

[-] squirrel@piefed.kobel.fyi 9 points 1 week ago
[-] ZILtoid1991@lemmy.world 8 points 1 week ago

Five Nights at Altman's

[-] lividweasel@lemmy.world 8 points 1 week ago
[-] rockerface@lemmy.cafe 6 points 1 week ago

Platinum, even. Star Platinum.

[-] RVGamer06@sh.itjust.works 7 points 1 week ago

Oh crap, is that Freddy Fazbear?

[-] jwt@programming.dev 5 points 1 week ago

Reminds me of that "have you ever had a dream" kid.
