484 points | Lavalamp too hot (discuss.tchncs.de)
submitted 3 weeks ago* (last edited 3 weeks ago) by swiftywizard@discuss.tchncs.de to c/programmer_humor@programming.dev
[-] stsquad@lemmy.ml 122 points 3 weeks ago

If you have ever read the "thought" process on some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I'm not even sure this isn't by design.

[-] swiftywizard@discuss.tchncs.de 79 points 3 weeks ago

I dunno, let's waste some water

[-] gtr@programming.dev 7 points 3 weeks ago

They are trying to get rid of us by wasting our resources.

[-] MajorasTerribleFate@lemmy.zip 13 points 3 weeks ago

So, it's Nestlé behind things again.

[-] SubArcticTundra@lemmy.ml 20 points 3 weeks ago

I'm pretty sure training is purely result-oriented, so anything that works goes.

[-] MotoAsh@piefed.social 3 points 3 weeks ago

Exactly why this shit isn't, and never will be, trustworthy.

[-] Feathercrown@lemmy.world 9 points 3 weeks ago

Why would it be by design? What does that even mean in this context?

[-] MotoAsh@piefed.social 6 points 3 weeks ago

You have to pay for tokens on many of the "AI" tools that you do not run on your own computer.

[-] Feathercrown@lemmy.world 8 points 3 weeks ago* (last edited 3 weeks ago)

Hmm, interesting theory. However:

  1. We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.

  2. LLMs are running at a loss right now; the company would lose more money than they gain from you, so there is no motive.

[-] jerkface@lemmy.ca 3 points 3 weeks ago

it was proposed less as a hypothesis about reality than as virtue signalling (in the original sense)

[-] deHaga@feddit.uk 4 points 3 weeks ago

Compute costs?

[-] dream_weasel@sh.itjust.works 4 points 3 weeks ago

This kind of stuff happens with any model you train from scratch, even before training for multi-step reasoning. It seems to happen more when there's not enough data in the training set, but it's not an intentional add. Output length is a whole deal.

[-] Darohan@lemmy.zip 79 points 3 weeks ago
[-] Kyrgizion@lemmy.world 54 points 3 weeks ago

Attack of the logic gates.

[-] ideonek@piefed.social 35 points 3 weeks ago
[-] FishFace@piefed.social 109 points 3 weeks ago

LLMs work by picking the next word* as the most likely candidate word given its training and the context. Sometimes it gets into a situation where the model's view of "context" doesn't change when the word is picked, so the next word is just the same. Then the same thing happens again and around we go. There are fail-safe mechanisms to try and prevent it but they don't work perfectly.

*Token
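A minimal greedy-decoding sketch of the failure mode this comment describes (the words and the bigram table below are made up; a real model scores thousands of tokens with learned weights): once a word's most likely successor is itself, the context stops changing and the output repeats until a fail-safe, here a simple length cutoff, kicks in.

```python
# Hypothetical transition table: word -> its most likely next word.
most_likely_next = {
    "the": "update",
    "update": "is",
    "is": "simple",
    "simple": "or",
    "or": "or",  # fixed point: "or" predicts "or", and around we go
}

def greedy_decode(start: str, max_tokens: int = 12) -> list[str]:
    """Always emit the single most likely next word (greedy decoding)."""
    out = [start]
    for _ in range(max_tokens - 1):  # max-length cutoff as the fail-safe
        nxt = most_likely_next.get(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return out

print(" ".join(greedy_decode("the")))
# -> the update is simple or or or or or or or or
```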

[-] ideonek@piefed.social 20 points 3 weeks ago

That was the answer I was looking for. So it's similar to the "seahorse" emoji case, but this time, at some point it just glitched: the most likely next word for the sentence is "or", and after adding that "or" the next one is also "or", and after adding that one it's also "or" again, and after the 11th one... it might as well just commit, since that's the same context as with 10.

Thanks!

[-] MonkderVierte@lemmy.zip 5 points 3 weeks ago

I got it into a "while it is not" / "while it is" loop once.

[-] ch00f@lemmy.world 56 points 3 weeks ago

Gemini evolved into a seal.

[-] kamenlady@lemmy.world 12 points 3 weeks ago

or simply, or

[-] Arghblarg@lemmy.ca 31 points 3 weeks ago

The LLM showed its true nature: a probabilistic bullshit generator that got caught in a strange attractor of some sort within its own matrix of lies.

[-] ech@lemmy.ca 28 points 3 weeks ago

It's like the text predictor on your phone. If you just keep hitting the next suggested word, you'll usually end up in a loop at some point. Same thing here, though admittedly much more advanced.
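A rough sketch of that phone-keyboard analogy (the "message history" below is made up, and real keyboards use fancier models): count word bigrams, then always take the most frequent suggestion, like tapping the first autocomplete button over and over.

```python
from collections import Counter, defaultdict

# Hypothetical message history the "keyboard" has learned from.
history = "i can do it now and then i can do it for you and then i can".split()

# suggestions[w] counts which words have followed w.
suggestions = defaultdict(Counter)
for prev, cur in zip(history, history[1:]):
    suggestions[prev][cur] += 1

word, line = "i", ["i"]
for _ in range(14):
    if word not in suggestions:
        break
    word = suggestions[word].most_common(1)[0][0]  # tap the first suggestion
    line.append(word)

print(" ".join(line))
# -> i can do it now and then i can do it now and then i
```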

[-] vaultdweller013@sh.itjust.works 3 points 3 weeks ago

Example of my phone doing this.

I just want you are the only reason that you can't just forget that I don't have a way that I have a lot to the word you are not even going on the phone and you can call it the other way to the other one I know you are going out to talk about the time you are not even in a good place for the rest they'll have a little bit more mechanically and the rest is.

You can see it looping pretty damned quick with me just hitting the first suggestion after the initial I.

[-] MrScottyTay@sh.itjust.works 3 points 3 weeks ago

I think I will be in the office tomorrow so I can do it now and then I can do it now and then I can do it for you and your dad and dad and dad and dad and dad and dad and dad and dad and dad and dad

That was mine haha

[-] palordrolap@fedia.io 18 points 3 weeks ago

Unmentioned by other comments: The LLM is trying to follow the rule of three because sentences with an "A, B and/or C" structure tend to sound more punchy, knowledgeable and authoritative.

Yes, I did do that on purpose.

[-] Cevilia@lemmy.blahaj.zone 12 points 3 weeks ago

Not only that, but also "not only, but also" constructions, which sound more emphatic, conclusive, and relatable.

[-] luciferofastora@feddit.org 2 points 3 weeks ago

I used to think learning stylistic devices like this was just an idle fancy, a tool designed simply to analyse poems, one of the many things you're certain you'll never need but have to learn in school anyway.

What a fool I've been.

[-] kogasa@programming.dev 17 points 3 weeks ago

Turned into a sea lion

[-] ChaoticNeutralCzech@feddit.org 20 points 3 weeks ago* (last edited 3 weeks ago)

Nah, too cold. It stopped moving, and the computer can't generate any more random numbers to pick from the LLM's weighted suggestions. Similarly, LLMs have a sampling setting called "temperature": too cold and the output is repetitive, unimaginative, and copies the input too heavily (like sentences written from first autocomplete suggestions); too hot and it's chaos: 98% nonsense, 1% repeat of the input, 1% something useful.
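A minimal sketch of that temperature knob (the logits and token names below are hypothetical, and real samplers layer tricks like top-k/top-p on top): logits are divided by the temperature before softmax, so low values sharpen the distribution toward one token and high values flatten it toward noise.

```python
import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    """Sample one token from a temperature-scaled softmax."""
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())  # subtract the max for numerical stability
    weights = {t: math.exp(s - m) for t, s in scaled.items()}
    r = random.uniform(0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point edge case: fall back to the last token

logits = {"or": 2.0, "and": 1.5, "lava": 0.5, "xylophone": -1.0}
for t in (0.1, 1.0, 10.0):
    print(t, [sample(logits, t) for _ in range(8)])
# 0.1: almost always "or" (repetitive); 10.0: close to uniform gibberish
```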

[-] squirrel@piefed.kobel.fyi 9 points 3 weeks ago
[-] lividweasel@lemmy.world 8 points 3 weeks ago
[-] rockerface@lemmy.cafe 6 points 3 weeks ago

Platinum, even. Star Platinum.

[-] MotoAsh@piefed.social 2 points 3 weeks ago

I don't see no 'a's between those 'or's for the full "ora ora ora ora" effect.

[-] ZILtoid1991@lemmy.world 8 points 3 weeks ago

Five Nights at Altman's

[-] RVGamer06@sh.itjust.works 7 points 3 weeks ago

Oh crap, is that Freddy Fazbear?

[-] jwt@programming.dev 5 points 3 weeks ago

Reminds me of that "have you ever had a dream" kid.

[-] kamen@lemmy.world 4 points 3 weeks ago

If software were your kid.

Credit: Scribbly G

[-] DylanMc6@lemmy.dbzer0.com 2 points 3 weeks ago

The AI touched that lava lamp
