submitted 6 days ago* (last edited 6 days ago) by furrowsofar@beehaw.org to c/technology@beehaw.org

Another AI fail. Letting AI write code and modify your file system without sandboxing or backups. What could go wrong?
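A minimal sketch (not from the article) of the kind of guardrail being talked about: only ever run what the AI generates against a throwaway copy of the project inside an isolated container, so the real file system is never touched. The `run_untrusted_script` helper and the paths are made up for illustration, and it assumes Docker and Python are installed.

```python
import shutil
import subprocess
import tempfile

def run_untrusted_script(script_path: str, project_dir: str) -> int:
    """Copy the project somewhere disposable and run the AI-written script
    there inside a network-less, auto-removed container."""
    workdir = tempfile.mkdtemp(prefix="ai-sandbox-")
    # Only a copy of the project is ever exposed to the generated code.
    shutil.copytree(project_dir, f"{workdir}/project")
    shutil.copy(script_path, f"{workdir}/project/script.py")
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",                # no outbound access
            "-v", f"{workdir}/project:/work",   # mount only the copy
            "-w", "/work",
            "python:3.12-slim",
            "python", "script.py",
        ],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
    return result.returncode
```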

[-] megopie@beehaw.org 2 points 5 days ago* (last edited 5 days ago)

Exactly. They’re just probabilistic models. LLMs are just outputting something that statistically could be what comes next. But that statistical process does not capture any real meaning or conceptualization, just vague associations of which words are likely to show up and in what order.

What people call hallucinations are just the system’s actual capability diverging from people’s expectation of what it is doing: expecting it to think and understand, when all it is doing is outputting a statistically likely continuation.
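A toy sketch of what “statistically likely continuation” means (illustrative only, not how any particular model is implemented): a bigram model that just counts which word tends to follow which, with no representation of meaning at all. Real LLMs do this over subword tokens with a neural network instead of a count table, but the generation loop is the same idea.

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, counts = zip(*follows[word].items())
    return random.choices(words, weights=counts)[0]

# Generate a "plausible" continuation with no understanding of cats or mats.
out = ["the"]
for _ in range(8):
    out.append(next_word(out[-1]))
print(" ".join(out))
```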
