
You know how Google's new AI Overviews feature is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to put glue on pizza to make sure the cheese won't slide off (psst... please don't do this).

Well, according to an interview at The Verge with Google CEO Sundar Pichai, published earlier this week just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which are what drive AI Overviews, and this feature "is still an unsolved problem."

[-] givesomefucks@lemmy.world 332 points 5 months ago

They keep saying it's impossible, when the truth is it's just expensive.

That's why they won't do it.

You'd have to train the AI only on good sources (scientific literature, not social media) and then pay experts to talk with it for long periods of time, giving feedback directly to the AI.

Essentially, if you want a smart AI you need to send it to college, not drop it off at the mall unsupervised for 22 years and hope for the best when you pick it back up.
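The "send it to college" idea above can be sketched as a feedback loop in which experts grade answers and the system shifts its trust toward sources that keep being right. This is a toy illustration with made-up names, not a real training API:

```python
# Toy sketch of expert-guided training: weights over sources are nudged
# up or down by expert verdicts, then renormalized. Illustrative only.
sources = {"peer_reviewed_paper": 0.5, "social_media_post": 0.5}

def expert_feedback(source: str, was_correct: bool, lr: float = 0.1) -> None:
    """Nudge the trust placed in a source based on an expert's verdict,
    clamp it at zero, and renormalize the weights to sum to 1."""
    sources[source] += lr if was_correct else -lr
    sources[source] = max(0.0, sources[source])
    total = sum(sources.values())
    for k in sources:
        sources[k] /= total

# Experts repeatedly grade answers traced back to each source.
for _ in range(20):
    expert_feedback("peer_reviewed_paper", was_correct=True)
    expert_feedback("social_media_post", was_correct=False)

print(sources)  # trust concentrates on the curated source
```

The expensive part is exactly the loop body: every call to `expert_feedback` stands in for paid human expert time.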

[-] Excrubulent@slrpnk.net 153 points 5 months ago

No, he's right that it's unsolved. Humans aren't great at reliably telling truth from fiction either. If you've ever been in a highly active comment section, you'll notice certain "hallucinations" developing, usually because someone came along sounding confident and everyone just believed them.

We don't even know how to get actual people to do this reliably, so how would a fancy Markov chain? It can't. I don't think you solve this problem without AGI, and that's something AI evangelists don't want to think about, because then the conversation changes significantly. They're in this for the hype bubble, not the ethical implications.

[-] dustyData@lemmy.world 74 points 5 months ago

We do know. It's called critical thinking education. This is why we send people to college. Of course there are highly educated morons, but we are edging bets. This is why the dismantling or coopting of education is the first thing every single authoritarian does. It makes it easier to manipulate masses.

[-] Excrubulent@slrpnk.net 57 points 5 months ago

"Edging bets" sounds like a fun game, but I think you mean "hedging bets", in which case you're admitting we can't actually do this reliably with people.

And we certainly can't do that with an LLM, which doesn't actually think.

[-] RootBeerGuy@discuss.tchncs.de 55 points 5 months ago

I'll let you in on a secret: scientific literature has its fair share of bullshit too. The issue is, it's much harder to figure out that it's bullshit, unless it's the most blatant horseshit you've scientifically ever seen. So while it absolutely makes sense to say "let's just train these on good sources," no source is purely that. Of course it's still better to do it that way than how they do it now.

[-] givesomefucks@lemmy.world 33 points 5 months ago

The issue is, it's much harder to figure out that it's bullshit.

Google AI suggested you put glue on your pizza because a troll said it on Reddit once...

Not all scientific literature is perfect, which is one of the many factors that will still make my plan expensive and time-consuming.

You can't throw a toddler in a library and expect them to come out knowing everything in all the books.

AI needs that guided teaching too.

[-] Zarxrax@lemmy.world 43 points 5 months ago

In addition to the other comment, I'll add that just because you train the AI on good and correct sources of information, it still doesn't necessarily mean it will give you a correct answer every time. It's more likely to, but not guaranteed.

[-] Leate_Wonceslace@lemmy.dbzer0.com 30 points 5 months ago

it's just expensive

I'm a mathematician who's been following this stuff for about a decade or more. It's not just expensive. Generative neural networks cannot reliably evaluate truth values; it will take time to research how to improve AI in this respect. This is a known limitation of the technology. Closely controlling the training data would certainly make the information more accurate, but that won't stop it from hallucinating.

The real answer is that they shouldn't be trying to answer questions using an LLM, especially because they had a decent algorithm already.

[-] TacticsConsort@yiffit.net 221 points 5 months ago

In the interest of transparency, I don't know if this guy is telling the truth, but it feels very plausible.

[-] Hubi@lemmy.world 141 points 5 months ago

The solution to the problem is to just pull the plug on the AI search bullshit until it is actually helpful.

[-] wewbull@feddit.uk 46 points 5 months ago

Absolutely this. Microsoft is going headlong into the AI abyss. Google should be the company that calls it out and says "No, we value the correctness of our search results too much".

It would obviously be a bullshit statement at this point after a decade of adverts corrupting their value, but that's what they should be about.

[-] jojo@lemmy.blahaj.zone 26 points 5 months ago

Don't count on it. The head of Search doesn't care about anything but profit; he's the same guy who drove Yahoo into the ground.

[-] MNByChoice@midwest.social 131 points 5 months ago

Good. Nothing will get us through the hype cycle faster than obvious public failure. Then we can get on with productive uses.

[-] Tier1BuildABear@lemmy.world 41 points 5 months ago

I don't like the sound of getting on with "productive uses" either though. I hope the entire thing is a catastrophic failure.

[-] Resol@lemmy.world 94 points 5 months ago

If you can't fix it, then get rid of it, and don't bring it back until we reach a time when it's good enough to not cause egregious problems (which is never, so basically don't ever think about using your silly Gemini thing in your products ever again)

[-] masquenox@lemmy.world 83 points 5 months ago

Since when has feeding us misinformation been a problem for capitalist parasites like Pichai?

Misinformation is literally the first line of defense for them.

[-] Badeendje@lemmy.world 34 points 5 months ago

But this is not misinformation, it is uncontrolled nonsense. It directly devalues their offering of being able to provide you with an accurate answer to something you look for. And if their overall offering becomes less valuable, so does their ability to steer you using their results.

So while the incorrect nature is not a problem in itself for them, (as you see from his answer)… the degradation of their ability to influence results is.

[-] sentient_loom@sh.itjust.works 78 points 5 months ago

Here's a solution: don't make AI provide the results. Let humans answer each other's questions like in the good old days.

[-] bjoern_tantau@swg-empire.de 36 points 5 months ago

Whatever happened to Jeeves? He seemed like a good guy. He probably burned out.

[-] werefreeatlast@lemmy.world 26 points 5 months ago

You can find him walking Lycos around GeoCities, picking up its poop in little green plastic bags.

[-] SuddenDownpour@sh.itjust.works 76 points 5 months ago

Has No Solution for Its AI Providing Wildly Incorrect Information

Don't use it??????

AI has no means to check the heaps of garbage data it has been fed against reality, so even if someone were to somehow code one capable of deep, complex epistemological analysis (at which point it would already be something far different from what the media currently calls AI), as long as there's enough flat-out wrong stuff in its data, there's a growing chance of it screwing up.

[-] GenosseFlosse@lemmy.nz 74 points 5 months ago

Wow. In the 2000s and 2010s my impression was that Google was an amazing company where brilliant people worked to solve big problems and make the world a better place. In the last 10 years, all I was hoping for was that they would just stop making their products (Search, YouTube) worse.

Now they're just blindly riding the AI hype train, because "everyone else is doing AI."

[-] kwebb990@lemmy.world 68 points 5 months ago

and our parents told us Wikipedia couldn't be trusted....

[-] Paradox@lemdro.id 66 points 5 months ago

Replace the CEO with an AI. They're both good at lying and telling people what they want to hear, until they get caught

[-] xantoxis@lemmy.world 65 points 5 months ago

"It's broken in horrible, dangerous ways, and we're gonna keep doing it. Fuck you."

[-] joe_archer@lemmy.world 63 points 5 months ago

It is probably the most telling demonstration of the terrible state of our current society that one of the largest corporations on earth, which got where it is today by providing accurate information, is now happy to knowingly provide incorrect, and even dangerous, information in its own name, and not give a flying fuck about it.

[-] namingthingsiseasy@programming.dev 61 points 5 months ago

The best part of all of this is that now Pichai is going to really feel the heat of all his layoffs and other anti-worker policies. Google was once a respected company and a place where people wanted to work. Now they're just a generic employer with no real lure to bring people in. It worked fine when all he had to do was raise prices on their existing offerings and stuff in more ads, but when it comes to actual product development they are so hopelessly adrift that it's pretty hilarious watching them flail.

You can really see that consulting background of his doing its work. It's actually kinda poetic because now he'll get a chance to see what actually happens to companies that do business with McKinsey.

[-] badbytes@lemmy.world 59 points 5 months ago

Step 1: Replace the CEO with an AI. Step 2: Ask the new AI CEO how to fix things. Step 3: Blindly enact and reinforce its steps.

[-] Mad_Punda@feddit.de 51 points 5 months ago

these hallucinations are an "inherent feature" of AI large language models (LLM), which is what drives AI Overviews, and this feature "is still an unsolved problem"

Then what made you think it’s a good idea to include that in your product now?!

[-] PumpkinEscobar@lemmy.world 51 points 5 months ago

Rip up the Reddit contract and don’t use that data to train the model. It’s the definition of a garbage in garbage out problem.

[-] TheObviousSolution@lemm.ee 49 points 5 months ago

If you train your AI to sound right, your AI will excel at sounding right. The primary goal of LLMs is to sound right, not to be correct.

[-] jet@hackertalks.com 47 points 5 months ago

Media needs to stop calling this AI. There is no intelligence here.

These content-generator models only know how to string probabilistic tokens together. They have no ability to reason.

Evaluating text to determine whether it's factual is a currently unsolved problem... at least until we have artificial general intelligence.

AI will not be able to act like real AI until we solve real AI. That is the currently open problem.
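The point about probabilistic tokens can be shown with a toy bigram sampler. The tiny hand-made table below stands in for a real model's learned statistics; notice that truth never enters the sampling loop, only frequency:

```python
import random

# A hand-made bigram "model": each token maps to possible next tokens
# with probabilities. Purely illustrative, not a real LLM.
bigrams = {
    "the":    [("cheese", 0.5), ("glue", 0.5)],
    "cheese": [("melts", 1.0)],
    "glue":   [("melts", 1.0)],   # fluent-sounding, factually useless
    "melts":  [("evenly", 1.0)],
}

def next_token(token: str) -> str:
    """Sample the next token in proportion to its learned probability."""
    candidates, weights = zip(*bigrams[token])
    return random.choices(candidates, weights=weights)[0]

def generate(start: str, length: int = 3) -> str:
    """Chain samples together to produce a fluent but unchecked sentence."""
    out = [start]
    for _ in range(length):
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the"))  # equally happy to melt cheese or glue
```

Both possible outputs are grammatical and equally probable to the sampler; nothing in the loop can tell the sensible one from the nonsense one.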

[-] mrfriki@lemmy.world 46 points 5 months ago* (last edited 5 months ago)

So if a car maker releases a model that randomly turns abruptly to the left for no apparent reason, do they simply say "I can't fix it, deal with it"? No, you pull it out of the market, try to fix it and, if that is not possible, you retire the model before it kills anyone.

[-] ma11en@lemmy.world 27 points 5 months ago

I bet if there weren't agencies forcing them to do recalls, they wouldn't.

[-] Fedditor385@lemmy.world 43 points 5 months ago

This is so wild to me... as a software engineer, if my software doesn't work 100% of the time as specified, it fails tests, doesn't get released, and I get told to fix all issues before going live.

AI is basically another word for unreliable software full of bugs.

[-] Breve@pawb.social 40 points 5 months ago

Have they tried not using it? 🤦

[-] DudeImMacGyver@sh.itjust.works 38 points 5 months ago

How about stop forcing it on us?

[-] Tygr@lemmy.world 38 points 5 months ago

Google CEO essentially says the first result should not be trusted.

[-] johannesvanderwhales@lemmy.world 37 points 5 months ago

TBH this is surprisingly honest.

[-] Ranger@lemmy.blahaj.zone 37 points 5 months ago

Maybe if you can't get it to be accurate you shouldn't be trying to insert it into everything.

[-] Toneswirly@lemmy.world 36 points 5 months ago

The answer is: don't inflate your stock price by cramming the latest tech du jour into your flagship product... but we all know that's not an option.

[-] SomeGuy69@lemmy.world 36 points 5 months ago

I mean, they could disable it until it works; otherwise they're knowingly misleading people.

[-] go_go_gadget@lemmy.world 31 points 5 months ago

Obviously you don't have a business degree.

[-] retrospectology@lemmy.world 35 points 5 months ago* (last edited 5 months ago)

This is what happens every time society goes along with tech bro hype. They just run directly into a wall. They are the embodiment of "Didn't stop to think if they should" and it's going to cause a lot of problems for humanity.

[-] AFC1886VCC@reddthat.com 34 points 5 months ago

I think we should stop calling things AI unless they actually have their own intelligence independent of human knowledge and training.

[-] CrowAirbrush@lemmy.world 29 points 5 months ago

I have a solution: stop using their search engine to begin with, and slowly replace every other Google product you use.

[-] BrokenGlepnir@lemmy.world 29 points 5 months ago

There is apparently no limit to calling a bug a feature

[-] Tartas1995@discuss.tchncs.de 28 points 5 months ago

I know an easy fix. Just don't do AI.

[-] RizzRustbolt@lemmy.world 28 points 5 months ago

The model literally ate The Onion, and now they can't get it to throw it back up.

[-] mp3@lemmy.ca 27 points 5 months ago* (last edited 5 months ago)

They polluted their model with the sewage of the Internet.

The only worse thing they could have done is base their entire LLM dataset on 4chan.

this post was submitted on 27 May 2024
1093 points (98.1% liked)

Technology
