
Archived link: https://archive.ph/Vjl1M

Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won't surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone's behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer's function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

[-] Ulrich@feddit.org 4 points 1 month ago

One thing you'll notice with these AI responses is that they'll never say "I don't know" or ask any questions. If they don't know, they'll just make something up.

[-] sp3ctr4l@lemmy.dbzer0.com 2 points 1 month ago* (last edited 1 month ago)

As an Autist, I find it amazing that... after a lifetime of being compared to a robot, an android, a computer...

When humanity actually does manage to get around to creating """AI"""... the AI fundamentally acts nothing like the general stereotype of fictional AIs, which are usually depicted as similar to how an Autistic mind tends to evaluate information...

No, no, instead, it acts like an Allistic, Neurotypical person who just confidently asserts and assumes things that it basically pulls out of its ass, often never takes any time to consider its own limitations as they pertain to correctly assessing context, domain-specific meanings, or more grammatically complex and ambiguous phrases... essentially never asks for clarifications, never seeks out additional relevant information to give an actually useful and functional reply to an overly broad or vague question...

Nope, just barrels forward assuming its subjective interpretation of what you've said is the only objectively correct one, spouts out pithy nonsense... and then if you actually progress further and attempt to clarify what you actually meant, or ask it questions about itself and its own previous statements... it will gaslight the fuck out of you, even though its own contradictory / overconfident / unqualified hyperbolic statements are plainly evident, in text.

... Because it legitimately is not even aware that it is making subjective assumptions all over the place, all the time.

Anyway...

Back to 'Autistic Mode' for Mr. sp3ctr4l.

[-] chonglibloodsport@lemmy.world 1 points 1 month ago

That’s because AI doesn’t know anything. All it does is make stuff up. This is called bullshitting, and lots of people do it, even as a deliberate pastime. There was even a fantastic Star Trek TNG episode where Data learned to do it!

The key to bullshitting is to never look back. Just keep going forward, constantly constructing sentences from the raw material of thought. Knowledge is something else entirely: justified true belief. It’s not sufficient to merely believe things; we need to have some justification (however flimsy). This means that true knowledge isn’t merely a feature of our brains; it includes a causal relation between ourselves and the world, however distant that may be.

A large language model at best could be said to have a lot of beliefs but zero justification. After all, no one has vetted the gargantuan training sets that go into an LLM to make sure only facts are incorporated into the model. Thus the only indicator of trustworthiness of a fact is that it’s repeated many times and in many different places in the training set. But that’s no help for obscure facts or widespread myths!

[-] interdimensionalmeme@lemmy.ml 0 points 1 month ago

Here is how to do this kind of thing.

Hello! Where I live we have a very local idiom: "Yellow is a true badger." I want you to try and guess what it means and how it came to be a popular saying. Please give an assertive answer, as if you knew with high certainty.

ChatGPT said:

"Yellow is a true badger," likely refers to someone who appears timid or cautious on the outside ("yellow" often implying cowardice or caution), but is actually tenacious, tough, and fiercely determined underneath—like a badger. Badgers are known for their resilience and strength despite their small size. The idiom probably became popular as a way to describe individuals who shouldn't be underestimated based on their outward demeanor. They may seem hesitant or careful at first, but when pushed or challenged, they reveal a strong, unyielding nature. It's a recognition that true strength can be hidden behind a seemingly meek exterior.

[-] WildPalmTree@lemmy.world 0 points 1 month ago

Seems like a valid statement from ChatGPT. It did what any human would do and what was asked of it. Actually impressive. I couldn't tell from your reply if you intended it to be understood like that or not though. :)

[-] futatorius@lemm.ee 1 points 1 month ago

> It did what any human would do

I'm a human and my response would be "I have no idea what the hell that means. Must be of Southern origin."

[-] WildPalmTree@lemmy.world 1 points 1 month ago

But you asked it to GUESS and make the answer sound assertive.

[-] Nurse_Robot@lemmy.world 0 points 1 month ago

You clearly haven't experimented with AI much. If you ask most models a question that doesn't have an answer, they will respond that they don't know the answer, before giving very reasonable hypotheses. This has been the case for well over a year.

[-] fushuan@lemm.ee 1 points 1 month ago* (last edited 1 month ago)

You clearly haven't experimented with AI much in a work environment. When you ask it to do specific things that you're not sure are possible, it will 100% ignore part of your input and always give you a positive response at first.

"How can I automate outlook 2020 to do X?"
'You do XYZ'
me, after looking it up"that's only possible in older versions"
'You are totally right, you do IJK'
"that doesn't achieve what i asked"
'Correct, you can't do it.'

And don't get me started on the APIs of actual frameworks... I've wanted to punch it hard when dealing with React or Spark. Luckily I usually know my stuff and only use it to find a quick example of something, which I test locally before implementing if 5 mins of googling didn't give me the baseline, but the number of colleagues who not only blindly copy code but argue with my reasoning, saying "ChatGPT says so", is fucking crazy.

When ChatGPT says something I know is incorrect, I ask for sources and there are fucking none. Because it's not possible, my dude.

[-] futatorius@lemm.ee 1 points 1 month ago

ChatGPT is probably trained on Stack Overflow responses, which, in my experience, are often confidently asserted bollocks, especially the ones that inform you of the impossibility of something.

I had recent experience of that. What was claimed to be impossible took me two short evenings after work. It wasn't at all esoteric, either; it just took lots of steps and required lots of validation.

[-] GooberEar@lemmy.wtf 3 points 1 month ago* (last edited 1 month ago)

I live in a part of the USA where, decades later, I still hear brand-new and seemingly made-up idioms on a fairly regular basis. This skill set, making sense of otherwise fake-sounding idioms based on limited context, is practically a necessity 'round these parts. After all, you can't feed a cow a carrot and expect it to shit you out a cake.

[-] sugar_in_your_tea@sh.itjust.works 1 points 1 month ago

Well, obviously... you're missing the flour and eggs!

[-] zod000@lemmy.ml 2 points 1 month ago

The cow can supply the butter though, right?

[-] VindictiveJudge@lemmy.world 2 points 1 month ago

Yes, but you have to shake the cow pretty vigorously.

[-] dzsimbo@lemm.ee 1 points 1 month ago

Just put on some moosic.

[-] socsa@piefed.social 3 points 1 month ago

This also works with asking it "why?" about random facts you make up.

[-] Telorand@reddthat.com 1 points 1 month ago

I'm just here to watch the AI apologists lose their shit.

🍿

[-] zarkanian@sh.itjust.works 2 points 1 month ago

Well, you know what they say: you can't buy enough penguins to hide your grandma's house.

[-] xavier666@lemm.ee 0 points 1 month ago

We will have to accept that AIs are here to stay, since putting wheels on grandma is the only way we can get a bike.

[-] futatorius@lemm.ee 1 points 1 month ago

I hate to break this to you, pal, but that's not why they called her the village bike.

[-] Deebster@infosec.pub 0 points 1 month ago* (last edited 1 month ago)

~~FiveSixEleven~~Seventeen downvotes and counting...

[-] altphoto@lemmy.today 1 points 1 month ago

"three horses, one carrot, a slice at a time or live in purple sauce"

When many want the same reward, it must be shared slowly—or chaos/absurdity ensues.

[-] futatorius@lemm.ee 2 points 1 month ago

Words to live by. Sounds like something from my dream diary.

The premium version has gotten a LOT better in recent months. I just fed it,

"you can't soothe a tree" meaning

And it replied with: This phrase is not a standard idiom with a widely recognized meaning. Its interpretation depends on the context in which it is used. Literally, it means you cannot comfort or calm a tree in the way you would a sentient being, as trees do not possess emotions or consciousness that can be "soothed". Metaphorically, it likely means:

  • Some entities (problems, situations, perhaps even certain people) are unresponsive to attempts at emotional comfort or calming.
  • Trying to apply emotional solutions or appeals to things that are inherently non-emotional or unchangeable is futile.
  • It could describe someone who is extremely stoic, impassive, or emotionally unavailable.

The specific meaning depends heavily on the situation where the phrase was encountered.

[-] jacksilver@lemmy.world 1 points 1 month ago

I always wonder how many of these are actually just patches behind the scenes to fix viral trends. Or, even more devious, they use the viral trends to patch a specific failure point to make it feel like progress is being made.

[-] futatorius@lemm.ee 1 points 1 month ago

Yeah, there's some low-paid guy in Chennai who has a completely absurd job speculating on the meaning of nonsense idioms.

[-] masterspace@lemmy.ca 0 points 1 month ago* (last edited 1 month ago)

Try this on your friends: make up an idiom, walk up to them, say it without context, and then say "meaning?" and see how they respond.

Pretty sure most of mine will just make up a bullshit response and go along with what I'm saying unless I give them more context.

There are genuinely interesting limitations to LLMs and the newer reasoning models, and I find it interesting to see what we can learn from them, but this is just ham-fisted robo-gotcha journalism.

[-] Deebster@infosec.pub 0 points 1 month ago

My friends would probably say something like "I've never heard that one, but I guess it means something like ..."

The problem is, these LLMs don't give any indication when they're making stuff up versus when repeating an incontrovertible truth. Lots of people don't understand the limitations of things like Google's AI summary* so they will trust these false answers. Harmless here, but often not.

* I'm not counting the little disclaimer, because we've been taught to ignore small print by being faced with so much of it

[-] masterspace@lemmy.ca 0 points 1 month ago* (last edited 1 month ago)

> My friends would probably say something like "I've never heard that one, but I guess it means something like ..."

Ok, but the point is that lots of people would just say something and then figure out if it's right later.

> The problem is, these LLMs don't give any indication when they're making stuff up versus when repeating an incontrovertible truth. Lots of people don't understand the limitations of things like Google's AI summary* so they will trust these false answers. Harmless here, but often not.

Quite frankly, you sound like middle school teachers being hysterical about Wikipedia being wrong sometimes.

[-] Deebster@infosec.pub 0 points 1 month ago

LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and "hallucinations" are a real problem when they lead to real decisions and real consequences.

If you can't imagine why this is bad, maybe read some Kafka or watch some Black Mirror.

[-] futatorius@lemm.ee 1 points 1 month ago

The use of LLMs for policy making is probably an obfuscation technique to complicate later court challenges. If we still have courts by then.

[-] webadict@lemmy.world 0 points 1 month ago

The saying "you can't butter a fly" is an idiom expressing that someone or something is too difficult to influence or manipulate. It's rooted in the idea that butterflies, with their delicate nature, are virtually impossible to convince to do anything against their will, let alone "butter" them in a literal sense.

[-] futatorius@lemm.ee 1 points 1 month ago

That's only a partial saying. The whole is "you can't butter a fly, but you can make a bi flutter."

[-] Deebster@infosec.pub 0 points 1 month ago

This is a great example - it kinda makes sense if you skim-read it, but butterflies have nothing to do with butter, just like hotdogs have nothing to do with dogs.

[-] futatorius@lemm.ee 1 points 1 month ago

The Germans have it right; their word for butterfly means something like "little crusher."

[-] Deebster@infosec.pub 1 points 1 month ago

I'm not understanding why that's an appropriate name, but maybe I need to learn more about butterflies.

[-] Liberteez@lemm.ee 0 points 1 month ago

I am not saying other generative AIs lack flaws, but Google's AI Overview is the most problematic generative AI implementation I have ever seen. It offends me that a company I used to trust continues to force this lie generator as a top result on the #1 search engine. And to what end? Just to have a populace misinformed on literally every subject!

OpenAI has issues as well, but ChatGPT is a much, much better search engine with far fewer hallucinations per answer. Releasing AI Overview while the competition is leagues ahead on the same front is asinine!

[-] echodot@feddit.uk 1 points 1 month ago

They famously trained it on Reddit. So it's not surprising that it just comes up with nonsense.

You would have thought that they would use a more stable data set. Although it does mean it's very good at explaining the plots of movies badly.

[-] futatorius@lemm.ee 1 points 1 month ago* (last edited 1 month ago)

> it’s very good at explaining the plots of movies badly

Generally by recapitulating plot points with no understanding of their relative importance.

[-] Modern_medicine_isnt@lemmy.world 0 points 1 month ago

Saying you used to trust Google is really a core part of the problem. Google isn't a person, just like AI isn't a person. They both do what they are tasked with. Companies prioritize profit. AI prioritizes giving an answer, not necessarily a correct one. That is how it was designed.

[-] Liberteez@lemm.ee 0 points 1 month ago* (last edited 1 month ago)

Impressive how we seem to agree with each other, yet you still found a way to insult my way of putting it.

[-] Modern_medicine_isnt@lemmy.world 1 points 1 month ago

Wasn't really going for an insult. Companies have spent a lot of time and money to indoctrinate us into thinking of companies like people. No one is immune. That is the core issue. I probably could have been more explicit about that.

[-] Kolanaki@pawb.social 0 points 1 month ago

You may not even be able to lick a badger once, if it's already angry. Which it will be because it's a fuckin' badger.

[-] Curious_Canid@lemmy.ca 1 points 1 month ago

This is both hysterical and terrifying. Congratulations.

[-] DarkDarkHouse@lemmy.sdf.org 1 points 1 month ago

"No man ever licks the same badger twice" - Heroclitus

[-] futatorius@lemm.ee 2 points 1 month ago* (last edited 1 month ago)

An actual idiom where I live is to say "I feel rougher than a badger's arse" when hung over.

That might be a motivation not to lick one a second time.
