Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.

Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”

He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.

Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.

[-] lloram239@feddit.de 3 points 1 year ago* (last edited 1 year ago)

> because they give more accurate information, that simply is not true.

In my experience with BingChat, it's completely true. BingChat searches with Bing and summarizes the results, providing sources and all. And the answers are complete garbage most of the time, since the underlying search results are filled with garbage.
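Roughly, that pattern is just "search, dump the results into a prompt, summarize". Here's a minimal sketch of the idea in Python (not BingChat's actual pipeline; the Bing endpoint, key, and OpenAI model name are assumptions purely for illustration):

```python
import requests
from openai import OpenAI

# Illustrative only: the real BingChat pipeline is not public.
BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"  # assumed endpoint
BING_KEY = "your-bing-search-key"                             # assumed key

def search(query: str, count: int = 5) -> list[dict]:
    """Fetch the top web results (title, url, snippet) for a query."""
    r = requests.get(
        BING_ENDPOINT,
        headers={"Ocp-Apim-Subscription-Key": BING_KEY},
        params={"q": query, "count": count},
    )
    r.raise_for_status()
    return [
        {"title": p["name"], "url": p["url"], "snippet": p["snippet"]}
        for p in r.json()["webPages"]["value"]
    ]

def answer_with_sources(question: str) -> str:
    """Summarize whatever search returned (garbage in, garbage out)."""
    results = search(question)
    context = "\n".join(
        f"[{i + 1}] {r['title']} ({r['url']}): {r['snippet']}"
        for i, r in enumerate(results)
    )
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    reply = client.chat.completions.create(
        model="gpt-4",  # assumed model name
        messages=[{
            "role": "user",
            "content": f"Answer using only these results and cite them as [n]:\n"
                       f"{context}\n\nQuestion: {question}",
        }],
    )
    return reply.choices[0].message.content
```

The summary can only ever be as good as those snippets, which is exactly the problem.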

Meanwhile, if you ask ChatGPT, which doesn't have Internet access, you get a far more sophisticated and correct answer. You can also ask follow-up questions.

Web search is an absolutely terrible place for accurate information. ChatGPT, in contrast, consumes all the information out there, which makes it much harder for incorrect information to slip in, since information needs to be repeated frequently to stick around. It can be, and often is, still wrong of course, but it is far better than any single website you'll find.

And of course these are still very early days for LLMs. GPT was never built with correctness in mind; it was built to autocomplete text, and everything else was patchwork after the fact. The future of search is AI, no doubt about that.
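To make "built to autocomplete text" concrete: under the hood the model just keeps picking a plausible next token, nothing more. A minimal sketch, assuming the Hugging Face transformers library and plain GPT-2, purely as an illustration:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The first man on the Moon was", return_tensors="pt").input_ids
for _ in range(8):
    logits = model(ids).logits          # scores for every possible next token
    next_id = logits[0, -1].argmax()    # greedily take the most plausible one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tok.decode(ids[0]))  # whatever continuation looked likeliest, true or not
```

Nothing in that loop checks whether the continuation is correct; it only comes out right when the plausible text also happens to be true.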

[-] sndrtj@feddit.nl 13 points 1 year ago

ChatGPT flat-out hallucinates quite frequently in my experience. It never says "I don't know / that is impossible / no one knows" to queries that simply don't have an answer. Instead, it opts to give a plausible-sounding but completely made-up answer.

A good AI system wouldn't do this. It would be honest and give no results when the information simply doesn't exist. However, that is quite hard for LLMs, as they are essentially glorified next-word predictors. The cost metric isn't accuracy of information; it's plausible-sounding conversation.
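You can see that in the training objective itself: the loss is just cross-entropy against whatever word actually came next in the training text, so "sounded like the data" is rewarded and "was factually correct" never enters the picture. A toy sketch with made-up numbers (assuming PyTorch, only to show the shape of the objective):

```python
import torch
import torch.nn.functional as F

# Made-up scores the model assigns to five candidate next words
# after the prompt "The capital of Australia is ...".
vocab  = ["Sydney", "Canberra", "Melbourne", "big", "unknown"]
logits = torch.tensor([[3.0, 1.5, 1.0, 0.2, 0.1]])  # "Sydney" sounds most plausible
target = torch.tensor([1])                           # training text actually said "Canberra"

loss = F.cross_entropy(logits, target)
print(loss.item())  # the only signal: match the data's next word; no notion of "true"
```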

[-] pascal@lemm.ee 3 points 1 year ago

Ask ChatGPT "tell me the biography of the famous painter sndrtj" to see how good the bot is at hallucinating an incredibly realistic story that never happened.

[-] Takumidesh@lemmy.world 4 points 1 year ago
[-] pascal@lemm.ee 2 points 1 year ago

Oh, they fixed that! But I see you're using v4.

[-] CarlsIII@kbin.social 1 point 1 year ago

You don’t even have to make stuff up to get it to hallucinate. I once asked ChatGPT who the original bass player for Metallica was, and it repeatedly gave me the wrong answer, at one point even saying “Dave Ellefson.”
