[-] Veraticus@lib.lgbt 21 points 1 year ago* (last edited 1 year ago)

I was mostly posting this because the last time LLMs came up, people kept on going on and on about how much their thoughts are like ours and how they know so much information. But as this article makes clear, they have no thoughts and know no information.

In many ways they are simply a mathematical party trick; formulas trained on so much language, they can produce language themselves. But there is no “there” there.

[-] lily33@lemm.ee 11 points 1 year ago* (last edited 1 year ago)

have no thoughts

True

know no information

False. There's plenty of information stored in the models, and plenty of papers that delve into how it's stored, or how to extract or modify it.

I guess you can nitpick over the word "know" and what it means, but as someone else pointed out, we don't actually know what that means in humans anyway. But LLMs do use the information stored in context, they don't simply regurgitate it verbatim. For example (from this article):

If you ask an LLM what's near the Eiffel Tower, it'll list locations in Paris. If you edit its stored information so it thinks the Eiffel Tower is in Rome, it'll actually start suggesting sights in Rome instead.
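A toy sketch of the idea (this is an illustration of the behavior, not the actual model-editing technique the article describes; all names and facts here are made up): answers are derived from a stored association, so editing that one association changes every downstream answer.

```python
# A single "stored fact" that other answers depend on
facts = {"Eiffel Tower": "Paris"}

# Sights per city (toy data)
sights = {
    "Paris": ["the Louvre", "Notre-Dame"],
    "Rome": ["the Colosseum", "the Pantheon"],
}

def nearby(landmark):
    # Answers are derived from the stored fact, not memorized verbatim
    city = facts[landmark]
    return sights[city]

print(nearby("Eiffel Tower"))   # sights in Paris

facts["Eiffel Tower"] = "Rome"  # the "edit"
print(nearby("Eiffel Tower"))   # now sights in Rome
```

The point of the analogy: because the answer is computed *through* the stored association rather than retrieved as a canned string, one edit propagates everywhere.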

[-] Veraticus@lib.lgbt 6 points 1 year ago

They only use words in context, which is their problem. An LLM doesn't know what the words mean or what the context means; it's glorified autocomplete.

I guess it depends on what you mean by "information." Since all of the words it uses are meaningless to it (it doesn't understand anything of what it either is asked or says), I would say it has no information and knows nothing. At least, nothing more than a calculator knows when it returns 7 + 8 = 15. It doesn't know what those numbers mean or what it represents; it's simply returning the result of a computation.

So too LLMs responding to language.
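The "glorified autocomplete" claim can be made concrete with a toy bigram model (a deliberately tiny sketch; a real LLM learns from vastly more text with a far richer architecture, but the point stands that plausible text can be generated purely from co-occurrence statistics, with no understanding anywhere):

```python
import random
from collections import defaultdict

# Hypothetical toy corpus
corpus = "the tower is in paris the tower is tall the city is old".split()

# Count which word follows which
nexts = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    nexts[a].append(b)

def autocomplete(word, n=5, seed=0):
    """Extend a prompt by repeatedly sampling a likely next word."""
    random.seed(seed)
    out = [word]
    for _ in range(n):
        choices = nexts.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(autocomplete("the"))  # grammatical-looking text, zero understanding
```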

[-] lily33@lemm.ee 4 points 1 year ago* (last edited 1 year ago)

Why is that a problem?

For example, I've used it to learn the basics of Galois theory, and it worked pretty well.

  • The information is stored in the model, so it can tell me the basics.
  • The interactive nature of talking to the LLM actually helped me learn better than just reading.
  • And I know enough general math that I can tell the rare occasions (and they were indeed rare) when it makes things up.
  • Asking it questions can be better than searching Google, because Google needs exact keywords to find the answer, and the LLM can be more flexible (of course, neither will answer if the answer isn't in the index/training data).

So what if it doesn't understand Galois theory - it could teach it to me well enough. Frankly if it did actually understand it, I'd be worried about slavery.

[-] Veraticus@lib.lgbt 2 points 1 year ago

Basically the problem is point 3.

You obviously know some of what it's telling you is inaccurate already. There is the possibility it's all bullshit. Granted a lot of it probably isn't, but it will tell you the bullshit with the exact same level of confidence as actual facts... because it doesn't know Galois theory and it isn't teaching it to you, it's simply stringing sentences together in response to your queries.

If a human were doing this we would rightly proclaim the human a bad teacher that didn't know their subject, and that you should go somewhere else to get your knowledge. That same critique should apply to the LLM as well.

That said it definitely can be a useful tool. I just would never fully trust knowledge I gained from an LLM. All of it needs to be reviewed for correctness by a human.

[-] lily33@lemm.ee 4 points 1 year ago

That same critique should apply to the LLM as well.

No, it shouldn't. Instead, you should compare it to the alternatives you have on hand.

The fact is,

  • Using the LLM was a better experience for me than reading a textbook.
  • And it was also a better experience for me than watching recorded video lectures.

So, if I have to learn something, I have enough background to spot hallucinations, and I don't have a teacher (having graduated college, that's always true), I would consider using it, because it's better than the alternatives.

I just would never fully trust knowledge I gained from an LLM

There are plenty of cases where you shouldn't fully trust knowledge you gained from a human, too.

And there are, actually, cases where you can trust the knowledge gained from an LLM. Not because it sounds confident, but because you know how it behaves.

[-] Veraticus@lib.lgbt 1 points 1 year ago* (last edited 1 year ago)

Obviously you should do what you think is right, so I mean, I'm not telling you you're living wrong. Do what you want.

The reason to not trust a human is different from the reasons not to trust an LLM. An LLM is not revealing to you knowledge it understands. Or even knowledge it doesn't understand. It's literally completing sentences based on word likelihood. It doesn't understand any of what it's saying, and none of it is rooted in any knowledge of the subject of any kind.
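"Completing sentences based on word likelihood" can be sketched in a few lines (the prompt and the likelihood numbers below are made up; a real model scores tens of thousands of candidate tokens). Note the machinery is identical whether the most likely word is true or false, which is why confidence in the output tells you nothing:

```python
import random

# Made-up next-word likelihoods after the prompt "The Eiffel Tower is in"
likelihoods = {"Paris": 0.90, "France": 0.07, "Rome": 0.03}

def next_word(dist, seed=None):
    """Pick one word, weighted by its likelihood."""
    rng = random.Random(seed)
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# The sampled word is emitted with the same fluency whether it
# happens to be right or wrong.
print(next_word(likelihoods))
```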

I find that concerning in terms of learning from it. But if it worked for you, then go for it.

[-] sincle354@beehaw.org 10 points 1 year ago

Sadly we don't even know what "knowing" is, considering human memory changes every time it is accessed. We might just need language and language only. Right now they're testing if generating verbalized trains of thought helps (it might?). The question might change to: Does the sum total of human language have enough consistency to produce behavior we might call consciousness? Can we brute force the Chinese room with enough data?

[-] pbjamm@beehaw.org 7 points 1 year ago

They are the perfect embodiment of the internet.

They know everything, but understand nothing

this post was submitted on 12 Sep 2023
151 points (100.0% liked)

Technology
