Lavalamp too hot (discuss.tchncs.de)
submitted 3 weeks ago* (last edited 3 weeks ago) by swiftywizard@discuss.tchncs.de to c/programmer_humor@programming.dev
[-] bunchberry@lemmy.world 1 points 3 weeks ago

This happened to me a lot when I tried to run big models with small context windows. It would effectively run out of memory: each new token wouldn't actually be added to the context, so the model would get stuck in an infinite loop repeating the previous token. It's possible there was a memory issue on Google's end.

[-] FishFace@piefed.social 1 points 3 weeks ago

There is something wrong if it's not discarding old context to make room for new

[-] bunchberry@lemmy.world 1 points 3 weeks ago

At least llama.cpp doesn't seem to do that by default. If it overruns the context window it just blorps.

[-] FishFace@piefed.social 1 points 3 weeks ago

I think there are parameters for that, from googling.
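The "discard old context to make room for new" idea can be sketched in a few lines. This is just an illustration of sliding-window eviction, not llama.cpp's actual implementation; the function and parameter names here are made up, though llama.cpp's context shifting similarly protects a prompt prefix while dropping the oldest tokens:

```python
def evict_to_fit(tokens, ctx_size, keep=0):
    """Drop the oldest unprotected tokens so the context fits.

    tokens:   the current token list, oldest first
    ctx_size: maximum context length
    keep:     number of leading tokens to protect (e.g. the system prompt)
    """
    if len(tokens) <= ctx_size:
        return tokens
    overflow = len(tokens) - ctx_size
    # Keep the protected prefix, then skip the oldest `overflow` tokens after it.
    return tokens[:keep] + tokens[keep + overflow:]

# With a 10-token history, an 8-token window, and a 2-token protected prefix,
# the two oldest unprotected tokens (2 and 3) get evicted:
print(evict_to_fit(list(range(10)), ctx_size=8, keep=2))
```

Without something like this, a generation loop that can't grow its context has nowhere to put new tokens, which is consistent with the repeated-token behavior described above.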

this post was submitted on 25 Jan 2026
484 points (97.5% liked)
