submitted 6 months ago by RGB@group.lt to c/technology@lemmy.world

Google rolled out AI Overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

[-] adam_y@lemmy.world 9 points 6 months ago

Can we swap out the word "hallucinations" for the word "bullshit"?

I think all AI/LLM stuff should be prefaced as "someone down the pub said..."

So, "someone down the pub said you can eat rocks" or, "someone down the pub said you should put glue on your pizza".

Hallucinations are cool, shit like this is worthless.

[-] kbin_space_program@kbin.run 2 points 6 months ago

The Google search result isn't a hallucination though.

It instead proves that LLMs just reproduce from the model they are supplied with. For example, the "glue on pizza" answer comes from a comment by a Reddit user called FuckSmith roughly 11 years ago.

[-] DarkThoughts@fedia.io 1 points 6 months ago

It instead proves that LLMs just reproduce from the model they are supplied with.

What do you mean by that? This isn't some secret but literally how LLMs work. lol What people mean by hallucinating is when LLMs "create" facts that aren't actually facts. Be it this genius recipe of glue pizza, or any other wild combination of its model's source material. The whole cooking thing is actually a great analogy: all of the information it was fed is the ingredients, and it just spits out various recipes based on those ingredients, without any guarantee that the result is actually edible.

[-] richieadler@lemmy.myserv.one 1 points 6 months ago

This isn't some secret but literally how LLMs work. lol

Yeah, but John Q. Public reads AI and thinks HAL 9000 and Skynet, and no additional explanation will convince them otherwise.

[-] kbin_space_program@kbin.run 0 points 6 months ago

There are a lot of people, including Google itself, claiming that this behaviour is an isolated incident and basically blaming users for trolling them.

https://www.bbc.com/news/articles/cd11gzejgz4o

I was working from the concept of "hallucinations" being things returned that are unrelated to the input query and not directly part of the model, unlike the glue-pizza answer.

[-] DarkThoughts@fedia.io 0 points 6 months ago

Your link does not match your statement.

[-] kbin_space_program@kbin.run 0 points 6 months ago

A Google spokesperson told the BBC they were "isolated examples".

Some of the answers appeared to be based on Reddit comments or articles written by satirical site, The Onion.

But Google insisted the feature was generally working well.

"The examples we've seen are generally very uncommon queries, and aren’t representative of most people’s experiences," it said in a statement.

It said it had taken action where "policy violations" were identified and was using them to refine its systems.

That's precisely what they are saying.

[-] DarkThoughts@fedia.io -1 points 6 months ago

I'm sorry but reading this as "Google blames users for trolling them" is either pure mental gymnastics or mental illness.

[-] Eheran@lemmy.world 2 points 6 months ago

No, hallucination is a really good term. It can be super confident and seemingly correct but still completely made up.

[-] kbin_space_program@kbin.run 2 points 6 months ago

It is, but it isn't applicable in at least the glue-pizza situation, as the probable source comment has been found on Reddit.

A better use of the term might be how, when you try to get Bing's image creator to make "Battletech" art, you mostly just get really obvious Warhammer 40k Space Marines and occasionally Iron Maiden album art.

[-] A_Very_Big_Fan@lemmy.world 2 points 6 months ago

I think delusion might be a better word. You can hallucinate and know it's not real

[-] adam_y@lemmy.world 1 points 6 months ago

My experience with certain chemicals suggests this is true.

[-] snooggums@midwest.social 1 points 6 months ago

That is just being WRONG.

[-] yukijoou@lemmy.blahaj.zone 0 points 6 months ago

for it to "hallucinate" things, it would have to believe in what it's saying. ai is unable to think - so it cannot hallucinate

[-] Eheran@lemmy.world 0 points 6 months ago

So how do you prove it can't think? Or that you actually can?

[-] yukijoou@lemmy.blahaj.zone 0 points 6 months ago

because it's a text generation machine..? i mean, i wouldn't say i can prove it, but i don't think anyone can prove it's capable of thinking, much less of reasoning

like, it can string together a coherent sentence thanks to well crafted equations, sure, but i wouldn't qualify that as "thinking", though i guess the definition of "thinking" is debatable

[-] Eheran@lemmy.world 0 points 6 months ago

It can tell you the best way to stack things on top of each other to get a tall tower. Etc.

Those are not random sentences. If you cannot define thinking in a way this machine fails at, then stop saying it does not think.

[-] Aceticon@lemmy.world 0 points 6 months ago

A parrot can be trained to tell you the best way to stack things on top of each other to get a tall tower.

This is just an electronic parrot: millions of times faster to train than the biological parrot, specialized in repetition alone (it can't really do anything else a parrot can), and trained on billions of texts.

You're confusing one specific form in which humans externally express cognition with the cognition itself: just because intelligence can produce some forms of textual communication doesn't mean the relationship holds in the opposite direction and such forms of textual communication require intelligence. Or, if you will: just because you can photograph a real pizza to get a picture of a pizza doesn't mean a picture of a pizza is necessarily of a real pizza, rather than of something with glue added to make it look like it has stringy melted cheese.

[-] Eheran@lemmy.world 0 points 6 months ago

Again, it is absolutely capable of coming up with its own logical stuff, hence my example. Stop saying it just copies existing stuff; that is simply wrong.

[-] Aceticon@lemmy.world 0 points 6 months ago
[-] Eheran@lemmy.world -1 points 6 months ago

Amazing reply, given the context.

[-] Aceticon@lemmy.world 0 points 6 months ago

I'm actually a domain expert on AI, whilst your "assertive denial without a single counter-argument" answer to my simplified explanation, together with the "understanding" of the subject matter shown in the post before that one, shows you're at the peak of the Dunning-Kruger curve on this domain, and also that you do not use analytical thinking or the scientific method in any way, shape or form when analysing a subject.

There is literally no point in explaining anything to somebody who reasons like that and is at that point of that curve.

You keep your strongly held "common sense" beliefs and I'll keep from wasting any more of my time.

[-] Eheran@lemmy.world -1 points 6 months ago

This paper clearly says it is capable of original thought. It also "speaks" of it in high regard in other respects.

[-] Aceticon@lemmy.world 0 points 6 months ago

Re-read it: it says AI is capable of "originality" and does not mention "thought" at all.

You're the one presuming that "originality" requires cognition and hence understood "originality" as meaning "original thought" even though they're different concepts (specifically the latter is a subset of the former).

In your interpretation of that paper you did the exact same logical mistake as you seem to be doing in your interpretation of LLMs - you made assumptions backed only by gut feeling thus taking a leap to reach a conclusion ultimately supported only by your gut feeling.

[-] Eheran@lemmy.world -1 points 6 months ago

"electronic parrot" and outperforming almost all humans in creativity and originality is an extreme contrast to me, regardless of my misuse of terms. So I fail to understand what you want to say, since this contrast must be apparent to you too.

The original context of my comment was even more basic and to me proven by what the paper says: Those are not things it copied somewhere. Also, I still think there is no test to prove it can/can't think.

[-] yukijoou@lemmy.blahaj.zone 0 points 6 months ago

it is absolutely capable of coming up with its own logical stuff

interesting, in my experience, it's only been good at repeating things, and failing on unexpected inputs - it's able to answer pretty accurately whether a small number is even or odd, but not a large one, which to me indicates it's not reasoning but parroting answers

do you have example prompts where it showed clear logical reasoning?

[-] Eheran@lemmy.world -1 points 6 months ago* (last edited 6 months ago)

Examples showing that it comes up with its own solutions to a problem? Just ask it something that could not have been on the Internet before. Professor talking about AGI in GPT 4

A personal example would be asking it to write Python code to solve a 2D thermal heat flux problem given some context and constraints.
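
(For concreteness, a minimal sketch of the kind of script that request would produce: steady-state 2D heat conduction on a small grid, solved by Jacobi iteration. The grid size, boundary temperatures and tolerance are invented for illustration; this is not the model's actual output.)

```python
import numpy as np

def solve_steady_heat(nx=50, ny=50, t_left=100.0, t_right=0.0,
                      t_top=0.0, t_bottom=0.0, tol=1e-4, max_iter=10_000):
    # Temperature field with fixed (Dirichlet) boundary values
    T = np.zeros((ny, nx))
    T[:, 0] = t_left
    T[:, -1] = t_right
    T[0, :] = t_top
    T[-1, :] = t_bottom
    for _ in range(max_iter):
        T_new = T.copy()
        # Each interior point relaxes toward the average of its four neighbours
        T_new[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] +
                                    T[1:-1, :-2] + T[1:-1, 2:])
        if np.max(np.abs(T_new - T)) < tol:
            return T_new
        T = T_new
    return T

if __name__ == "__main__":
    field = solve_steady_heat()
    print(field.round(1))
```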

[-] yukijoou@lemmy.blahaj.zone 0 points 6 months ago

well, i just tried it, and its answer is meh --

i asked it to transcribe "zenquistificationed" (made up word) in IPA, it gave me /ˌzɛŋˌkwɪstɪfɪˈkeɪʃənd/, which i agree with, that's likely how a native english speaker would read that word.

i then asked it to transcribe that into japanese katakana, it gave me "ゼンクィスティフィカションエッド" (zenkwisuthifikashon'eddo), which is not a great transcription at all - based on its earlier IPA transcription, カション (kashon') should be ケーシュン (kēshun'), and the エッド (eddo) part at the end should just not be there imo, or be shortened to just ド (do)

[-] Eheran@lemmy.world -1 points 6 months ago

This paper says it is capable of original thought. It also "speaks" of it in high regard in other respects. That is also my experience using it for... over a year?! now.

[-] Jrockwar@feddit.uk -2 points 6 months ago

Hallucination is a technical term. Nothing to do with thinking. The scientific community could have chosen another term to describe the issue but hallucination explains really well what's happening.

[-] yukijoou@lemmy.blahaj.zone 1 points 6 months ago

huh, i kinda assumed it was a term made up/taken by journalists mostly, are there actual research papers on this using that term?

[-] TheBlackLounge@lemm.ee 2 points 6 months ago

It used to mean all generated output though. Calling only mistakes hallucinations is new, definitely because of hype.

[-] richieadler@lemmy.myserv.one 0 points 6 months ago* (last edited 6 months ago)

It's a really bad term because it's usually associated with a mind, and LLMs are nothing of the sort.

[-] Knock_Knock_Lemmy_In@lemmy.world 0 points 6 months ago

Anthropomorphization is hard to avoid in AI.

[-] richieadler@lemmy.myserv.one 1 points 6 months ago

Many worthy things are difficult.

[-] Knock_Knock_Lemmy_In@lemmy.world 1 points 6 months ago

But is anthropomorphism of AI particularly worrying?

[-] richieadler@lemmy.myserv.one 1 points 5 months ago

It is when people tend to give more credence to entities that appear sentient and to have agency.

[-] TheBlackLounge@lemm.ee -1 points 6 months ago

So is bullshitting. More so, only human minds can bullshit.

We anthropomorphize machines all the time, it's fine.

I'd prefer we'd start calling all genai output hallucinations again. It used to be that way like 10 years ago, but somewhere along the line marketing decided hallucinated truths aren't "hallucinations".

[-] FutileRecipe@lemmy.world 1 points 6 months ago

So is bullshitting. More so, only human minds can bullshit.

And a bull's anus.

[-] richieadler@lemmy.myserv.one 1 points 6 months ago

We anthropomorphize machines all the time, it's fine.

It's fucking not, and I'm not changing my mind about it.

[-] Delphia@lemm.ee 0 points 6 months ago

I want an AI/LLM that has been trained exclusively on the technical documentation and a Haynes manual for a specific make and model of car.

"Hey AI, how do I change the fuel filter and what tools will I need?"

[-] TheKMAP@lemmynsfw.com 0 points 6 months ago

If you have the PDFs of that, you can build it with two clicks in GCP
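
(Roughly the idea, sketched outside GCP: pull the text out of the PDFs and paste the most relevant pages into the prompt. pypdf is a real library; ask_llm and the file name are placeholders for whatever chat API and manuals you actually have, and the keyword matching stands in for the embedding search a managed pipeline would use.)

```python
from pypdf import PdfReader  # pip install pypdf

def load_pages(pdf_paths):
    """Extract the plain text of every page in the given PDFs."""
    pages = []
    for path in pdf_paths:
        for page in PdfReader(path).pages:
            pages.append(page.extract_text() or "")
    return pages

def top_pages(pages, question, k=3):
    # Naive keyword overlap; a managed pipeline would use embeddings instead.
    words = set(question.lower().split())
    return sorted(pages, key=lambda p: -len(words & set(p.lower().split())))[:k]

def build_prompt(pages, question):
    context = "\n---\n".join(top_pages(pages, question))
    return ("Answer using only this excerpt of the service manual:\n"
            f"{context}\n\nQuestion: {question}")

# Hypothetical usage -- file name and ask_llm() are placeholders:
# prompt = build_prompt(load_pages(["haynes_manual.pdf"]),
#                       "How do I change the fuel filter and what tools do I need?")
# print(ask_llm(prompt))
```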

[-] Delphia@lemm.ee 1 points 6 months ago

Manufacturers and dealers don't tend to make service bulletins and the high-level stuff available to the consumer, unfortunately.

[-] afraid_of_zombies@lemmy.world -1 points 6 months ago

You can sorta get that now if you play with it. I was building a driver a few months back and gave it the PDFs involved.

[-] atrielienz@lemmy.world 0 points 6 months ago

I don't even think hallucinations is the right word for this. It's got a source. It is giving you information from that source. The problem is it's treating the words from that source as completely factual despite the fact that they are not. Hallucination, from what I've read, is actually more like when it queries its data set, can't find an answer, and then generates nonsense in order to provide an answer it doesn't have. I don't think that's the same thing.

[-] balder1991@lemmy.world 3 points 6 months ago* (last edited 6 months ago)

I don’t even think it’s correct to say it’s querying anything, in the sense of a database. An LLM predicts the next token with no regard for the truth (there’s no sense of factual truth during training to penalize it, since that’s a very hard thing to measure).

Keep in mind that the same characteristic that allows it to learn the language also allows it to sort of come up with facts; it's just a statistical distribution based on the whole context, which needs a bit of randomness so it can be "creative." So the ability to come up with facts isn't something LLMs were designed to do, it's just something we noticed happens as it learns the language.

So it learned from a specific dataset, but whether it will learn any given piece of information depends on how well represented it is in that dataset. Information that appears repeatedly on the web is quite easy for it to answer, as it was reinforced during training. Information that doesn't show up much is just not going to be learned consistently.[1]

[1] https://youtu.be/dDUC-LqVrPU
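
(A toy sketch of that "statistical distribution with a bit of randomness": invented scores for a handful of candidate tokens, softmax with a temperature, then sample. The tokens and numbers are made up; a real model scores a vocabulary of tens of thousands of tokens at every step.)

```python
import numpy as np

rng = np.random.default_rng(0)

candidates = ["2007", "2009", "2012", "Google", "the"]   # toy vocabulary
logits = np.array([2.1, 1.9, 0.3, 0.8, 0.1])             # invented raw scores

def sample_next(logits, temperature=1.0):
    # Softmax turns scores into probabilities; temperature reshapes the distribution
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

idx, probs = sample_next(logits, temperature=0.8)
print(dict(zip(candidates, probs.round(3))))
print("sampled next token:", candidates[idx])
```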

[-] atrielienz@lemmy.world 1 points 6 months ago

I understand the gist, but I don't mean that it's actively looking up facts. I mean that it is using bad information to give a result (as in, the information it was trained on says 1+1=5, so it gives that result because that's what the training data had). The hallucinations, as they are called by the people studying them, aren't that. They are when the training data doesn't have an answer for 1+1, so the LLM can't do the math to say that the next likely word is 2. It doesn't have a result at all, but it is programmed to give one, so it gives nonsense.

[-] balder1991@lemmy.world 2 points 6 months ago* (last edited 6 months ago)

Yeah, I think the problem is really that language is ambiguous and the LLMs can get confused about certain features of it.

For example, I often ask different models when the Go programming language was created, just to compare them. Some say 2007 most of the time and some say 2009 — which isn't all that wrong, as 2009 is when it was officially announced.

This gives me a hint that LLMs can mix up things that are “close enough” to the concept we’re looking for.
