[-] peeonyou@hexbear.net 8 points 8 hours ago* (last edited 8 hours ago)

This is the part that's hard to explain to people, or to get them to truly understand.
I keep having to remind my partner that LLMs don't understand anything. They're just spitting out words based on the statistical probability of those words appearing in certain orders, according to what they've been trained on. They can, and will, spit out complete garbage. Time and again people seem to forget that these models don't actually know or understand things, and it's good to be reminded of that before putting any trust in what you get back from them.
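To make the "statistical probability of word orders" point concrete, here's a toy sketch (not from the comment, and not how any real LLM is implemented): a bigram model that picks each next word purely by how often it followed the previous word in its training text. The corpus and names are made up for illustration; real LLMs use neural networks over long contexts, but they likewise sample from a learned probability distribution over the next token, with no notion of truth attached.

```python
import random
from collections import defaultdict, Counter

# Tiny made-up "training data" for the illustration.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug . the dog ate the bone ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev` in training."""
    candidates = follows[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate text: locally fluent, but it can happily produce
# "the cat ate the bone" even though that never appeared in the data.
word = "the"
out = [word]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

The model never checks whether a sentence is true or even appeared in its data; it only follows the learned word-order statistics, which is the "can and will spit out garbage" failure mode scaled down to a few lines.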
