LLMs aren’t world models
(yosefk.com)
This is what's hard to explain to people, and hard to get them to truly understand.
I keep having to remind my partner that LLMs don't understand anything. They're just spitting out words based on the statistical probability of those words appearing in certain orders in their training data.
They can, and will, spit out complete garbage.
Time and again, people seem to forget that these models don't actually know or understand anything. It's good to be reminded of that before putting any trust in what you get back from them.