It is learning from humans, after all.
The problem in this case is specifically that it is not being trained on humans. The LLM hallucinates nonsense, that nonsense ends up on the internet, and then the model recursively reads its own nonsense back in some kind of shit Ouroboros. This problem doesn't seem solvable with LLMs, and it's one of the things separating them from AGI (artificial general intelligence, the thing LLMs claim to be). Since the model needs more and more data to build more satisfying responses, it's increasingly susceptible to ingesting output from other LLMs.
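This degradation even has a name in the research literature: model collapse. Here's a minimal sketch of the idea, using a toy Gaussian in place of an LLM (the setup and numbers are illustrative assumptions, not any real training pipeline): each pass fits the "model" to the previous pass's samples, and because decoding favors likely outputs, the rare stuff washes out first.

```python
# Toy "model collapse" demo: fit a model to data, sample from it, refit on
# the samples, repeat. The Gaussian stands in for an LLM; truncating at
# 2 sigma stands in for decoding strategies that favor likely outputs.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=10_000)  # generation 0: "human" text

for gen in range(1, 6):
    mu, sigma = data.mean(), data.std()               # "train" on current corpus
    samples = rng.normal(mu, sigma, size=10_000)      # "publish" model output
    data = samples[np.abs(samples - mu) < 2 * sigma]  # only likely outputs survive
    print(f"generation {gen}: std = {data.std():.3f}")  # spread keeps shrinking
```

Run it and the standard deviation drops every generation: the distribution narrows toward its own average and the tails (the unusual, informative content) disappear.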
An LLM must ONLY be trained on humans, because it doesn't actually understand reasoning or linguistic structure; you end up with an "invisible green dragons sleep furiously" response very quickly. But the LLM also can't tell whether the text it's ingesting came from a human or another LLM.
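And filtering isn't easy either. The usual heuristic is to flag text that looks too statistically "predictable" under some reference model. A rough sketch of that idea, with a character-bigram model as a made-up stand-in for the real thing (real detectors use full LLMs and still misfire constantly):

```python
# Sketch of perplexity-based "machine text" detection and why it's shaky:
# low perplexity = predictable = "machine-like", but careful human prose
# and sampled LLM prose overlap heavily on this axis.
import math
from collections import Counter

def bigram_perplexity(text: str, corpus: str) -> float:
    """Perplexity of text under add-one-smoothed character bigrams from corpus."""
    pairs = Counter(zip(corpus, corpus[1:]))
    firsts = Counter(corpus[:-1])
    vocab = len(set(corpus)) or 1
    log_prob = sum(
        math.log((pairs[(a, b)] + 1) / (firsts[a] + vocab))
        for a, b in zip(text, text[1:])
    )
    return math.exp(-log_prob / max(len(text) - 1, 1))

reference = "the quick brown fox jumps over the lazy dog " * 50
bland = "the quick brown fox jumps over the lazy dog"  # very predictable
quirky = "a fox of a different stripe leapt, oddly"    # less predictable

# Both scores land on a continuous scale with no clean human/LLM cutoff.
print(bigram_perplexity(bland, reference))   # low perplexity
print(bigram_perplexity(quirky, reference))  # higher perplexity
```

There's no threshold that separates the two sources, so a crawler can't reliably keep LLM output out of the next training set.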
The major GPT-based systems will deny being AGIs, and the companies behind them deny it too. Is anyone reputable actually saying LLMs are AGIs?