submitted 3 days ago by solo@slrpnk.net to c/technology@beehaw.org
[-] melroy@kbin.melroy.org 28 points 2 days ago

Well, by design AI is always hallucinating. Lol. That is how they work: basically trying to hallucinate and predict the next word / token.

[-] vintageballs@feddit.org 8 points 2 days ago

No, at least not in the sense that "hallucination" is used in the context of LLMs. The term is specifically used to differentiate between the two cases you jumbled together: outputting correct information (as represented in the training data) vs. outputting "made-up" information.

A language model doesn't "try" anything; it does what it is trained to do: predict the next token. But that is not hallucination, that is the training objective.
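
To make the "predict the next token" point concrete, here is a minimal, hypothetical Python/PyTorch sketch of an autoregressive sampling loop (the `model` and `tokenizer` interfaces are assumed for illustration, not taken from any specific library). The model only ever scores the next token; whether the continuation happens to be factually true is never part of the objective.

```python
import torch

def generate(model, tokenizer, prompt, max_new_tokens=50):
    # Hypothetical autoregressive decoding loop: at each step the model
    # outputs a probability distribution over the vocabulary and we sample
    # the next token from it. Nothing in this objective checks whether the
    # resulting text is factually correct.
    ids = tokenizer.encode(prompt)                    # list of token ids (assumed interface)
    for _ in range(max_new_tokens):
        logits = model(torch.tensor([ids]))[0, -1]    # scores for the next position
        probs = torch.softmax(logits, dim=-1)         # turn scores into probabilities
        next_id = torch.multinomial(probs, 1).item()  # sample one token id
        ids.append(next_id)
    return tokenizer.decode(ids)
```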

Also, though not widely used, there are other types of LLMs, e.g. diffusion-based ones, which do not use a next-token prediction objective but rather iteratively predict parts of the text in multiple places at once (LLaDA is one such example). And, of course, these models also hallucinate a bunch if you let them.
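
For contrast, here is a rough, hypothetical sketch (my illustration, not LLaDA's actual code or API) of what iterative masked prediction can look like: every position starts masked, and the model fills positions in a few at a time rather than strictly left to right.

```python
import torch

def diffusion_decode(model, mask_id, length=32, steps=8):
    # Masked-diffusion-style decoding sketch: start from an all-mask sequence
    # and, over several steps, commit the model's most confident predictions
    # for a subset of still-masked positions, refining the whole text at once
    # instead of going strictly left to right. `model` is assumed to return
    # per-position logits of shape (batch, length, vocab).
    ids = torch.full((1, length), mask_id, dtype=torch.long)
    for step in range(steps):
        logits = model(ids)                           # predict every position at once
        conf, pred = logits.softmax(dim=-1).max(dim=-1)
        still_masked = ids == mask_id
        if not still_masked.any():                    # nothing left to fill in
            break
        conf = conf.masked_fill(~still_masked, -1.0)  # only consider masked slots
        k = max(1, int(still_masked.sum().item()) // (steps - step))
        top = conf.topk(k, dim=-1).indices            # most confident masked positions
        ids[0, top[0]] = pred[0, top[0]]              # commit those tokens
    return ids
```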

Redefining a term to suit some straw man AI boogeyman hate only makes it harder to properly discuss these issues.
