Google's AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g., listing slavery's positives.

[-] HughJanus@lemmy.ml 72 points 1 year ago

People think of AI as some sort of omniscient being. It's just software spitting back the data it's been fed. It has no way to tell true information from false information, because it doesn't actually know anything.
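This "spitting back the data it's been fed" point can be made concrete with a toy sketch (nothing like a production LLM, just an illustration): a tiny bigram model that generates text purely from the statistics of its training corpus. Feed it a false statement and it will repeat it, because "truth" never enters the process.

```python
import random
from collections import defaultdict

def train(corpus):
    """Record, for each word, which words followed it in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=5, seed=0):
    """Walk the bigram table, picking a recorded successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nexts = model.get(out[-1])
        if not nexts:
            break  # no successor ever observed; stop
        out.append(rng.choice(nexts))
    return " ".join(out)

# The model has no notion of truth, only of what it was fed:
model = train("the moon is made of cheese")
print(generate(model, "the"))  # reproduces the false claim verbatim
```

Real LLMs are vastly more sophisticated, but the underlying limitation the comment describes is the same: output is a function of the training data, not of any fact-checking step.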

[-] baatliwala@lemmy.world 9 points 1 year ago

And then when you do ask humans to help AI parse true information, people cry about censorship.

[-] Chailles@lemmy.world 1 point 1 year ago

Being what is essentially the arbiter of what counts as true or morally acceptable is never going to not be highly controversial.

[-] HughJanus@lemmy.ml 1 point 1 year ago

Well, parsing the truth is difficult for humans too, even if somewhat less so.

[-] wewbull@feddit.uk 0 points 1 year ago

What!?!? I don't believe that. Who are these people?

[-] hornedfiend@sopuli.xyz 2 points 1 year ago

What's more worrisome are the sources it used to feed itself. Dangerous times for the younger generations, as they are more accustomed to using such tech.

[-] HughJanus@lemmy.ml 6 points 1 year ago

> What's more worrisome are the sources it used to feed itself.

It's usually just the entirety of the internet in general.

[-] stopthatgirl7@kbin.social 10 points 1 year ago

Well, I mean, have you seen the entirety of the internet? It’s pretty worrisome.

[-] HughJanus@lemmy.ml 10 points 1 year ago* (last edited 1 year ago)

The internet is full of both the best and the worst of humanity. Much like humanity itself.

[-] EnderMB@lemmy.world 2 points 1 year ago

While true, it's ultimately down to those training and evaluating a model to ensure these edge cases don't appear. That's not as hard when you work with compositional models that are each good at one thing, but all the big tech companies are in a ridiculous rush to get their LLMs out. Naturally, that rush means they kinda forget that LLMs were often not the first choice for AI tooling because... well, they hallucinate a lot, and they do stuff you really don't expect at times.

I'm surprised that Google are having so many issues, though. The belief in tech has been that Google had been working on these problems for many years, yet they seem to be having more problems than everyone else.

[-] Hamartiogonic@sopuli.xyz 2 points 1 year ago* (last edited 1 year ago)

Even though our current models can be really complex, they are still very very far away from being the elusive General Purpose AI sci-fi authors have been writing about for decades (if not centuries) already. GPT and others like it are merely Large Language Models, so don’t expect them to handle anything other than language.

Humans think of the world through language, so it's very easy to be deceived by an LLM into thinking that you're actually talking to a GPAI. That misconception is an inherent flaw of the human mind. Language comes so naturally to us, and we often use it as a shortcut to assess the intelligence of other people. Generally speaking that works reasonably well, but an LLM is able to exploit that feature of human behavior in order to appear smarter than it really is.

this post was submitted on 24 Aug 2023
432 points (88.6% liked)

Technology


This is a most excellent place for technology news and articles.
