Google is losing it (lemmy.world)
submitted 5 months ago by kokesh@lemmy.world to c/technology@lemmy.world
[-] RGB3x3@lemmy.world 5 points 5 months ago* (last edited 5 months ago)

Google has been bad for a long time, but they've shit the bed so hard lately. Seriously, look at this:

I actually run out of screenshot space before I can get to an actual regular search result!

[-] ILikeBoobies@lemmy.ca 1 points 5 months ago* (last edited 5 months ago)

Bing has a similar problem where it just repeats results; some pages have only one result, so you just keep clicking next.

[-] beebarfbadger@lemmy.world 4 points 5 months ago

Google: "You wanted words? Here are some words!"

[-] kokesh@lemmy.world 1 points 5 months ago

Maybe they know something we don't? What if it's going to be a crime series following the "Fall Guy" case: a man who was a Boeing whistleblower and got sucked out of the fuselage mid-flight. Was it the usual door falling off, or was it murder? Maybe it's being filmed right now and Google leaked the information?

[-] M0oP0o@mander.xyz 4 points 5 months ago

We really need a whole community just for the very funny AI errors like this. I could spend all day reading about leaving a dog in a hot car, jumping off a bridge and eating at least one rock a day.

[-] silasmariner@programming.dev 1 points 5 months ago

And I'd be thrilled if that material were quarantined somewhere 😅

[-] M0oP0o@mander.xyz 0 points 5 months ago

Why? This is great content.

[-] Chip_Rat@lemmy.world 2 points 5 months ago

Great content isn't necessarily everyone's preferred content. Having it all in one place lets the people who want to see it see it, and the people who don't, don't have to. Win-win.

[-] silasmariner@programming.dev 1 points 5 months ago

Oversaturation from the presentation format

[-] isles@lemmy.world 3 points 5 months ago

I kinda like the new google. It's strong and wrong and doesn't afraid of anything.

[-] weew@lemmy.ca 2 points 5 months ago

Well, we know Google won't get rid of this.

They'll only cancel it once it actually works and becomes useful.

[-] AFC1886VCC@reddthat.com 1 points 5 months ago

What the hell is going on with Google search? Has it completely shit itself after the AI implementation? I know it's been bad for a while, but this is another level.

[-] lemmyvore@feddit.nl 2 points 5 months ago

Short answer: yes. The ratio of LLM-generated noise to actual content is increasing exponentially as we speak. To us it seems to have happened overnight because the increase is so steep, but it's been building for several years. And it's going to get a lot worse.

Honestly, I think we'll have to go back to 90s methods like web rings and human curated link directories.

[-] PenisWenisGenius@lemmynsfw.com 1 points 5 months ago* (last edited 5 months ago)

I do random hobby tinkering and search results have become so useless that I'm having to read a lot more books. Everything takes longer this way.

[-] mojo_raisin@lemmy.world 1 points 5 months ago

It's time to return to human curated directories.

[-] jamyang@lemmy.world 1 points 5 months ago

Do you know if there are any active ones? Hopefully categorized according to genre, geography, language etc?

[-] mojo_raisin@lemmy.world 1 points 5 months ago

Nope, been thinking about what it would take to make one though.

[-] dependencyinjection@discuss.tchncs.de 1 points 5 months ago* (last edited 5 months ago)

I stopped using Google years ago. I switched to Bing but had to stop that too, as it would divert me to MSN to sign in whenever I clicked a link to a news article, like one from The Independent or The Times or any other outlet.

I then started using DuckDuckGo, which is powered by Bing, but found it wasn't great for many searches.

I now use Arc Search most of the time and tap "Browse for Me" to get the information I want without the bullshit. Search is essentially dead due to greed.

[-] KingThrillgore@lemmy.ml 1 points 5 months ago

I spent most of today looking at places to rent in Denver, and I come home to Google having killed its fucking search engine. What the hell is going on?

[-] jennwiththesea@lemmy.world 4 points 5 months ago

That's what you get for trying to have a real life.

[-] DragonOracleIX@lemmy.ml 1 points 5 months ago

Google decided that the entirety of Reddit is perfect training data for their LLM. People's shitposts from 10 years ago have now been given the spotlight at the top of Google searches.

[-] fne8w2ah@lemmy.world 1 points 5 months ago

Did somebody say enshittification?

[-] SuddenDownpour@sh.itjust.works 1 points 5 months ago

AI and its consequences have been a disaster for Google.

[-] tedu@azorius.net 0 points 5 months ago

So weird, that's not what I see.

[-] voracitude@lemmy.world 1 points 5 months ago

On the one hand, generative AI doesn't have to give deterministic answers, i.e. it won't necessarily generate the same answer even when asked the same question in the same way.

But on the other hand, editing the HTML of any page to say whatever you want and then taking a screenshot of it is very easy.

[-] QuadratureSurfer@lemmy.world 0 points 5 months ago

Technically, generative AI will always give the same answer when given the same input. But in practice a random "seed" is mixed into that input, so it can give a different answer every time even if you ask the same question.
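
To illustrate (this is just a toy sketch, not how any real search or chat product is wired up; the vocabulary and scores below are made up): the model's scores for the next token are fixed by the input, and only the sampler's seed decides which token actually comes out, so fixing the seed fixes the answer.

    import random

    # Toy stand-in for a model's output: scores for candidate next tokens.
    # The words and numbers here are invented purely for illustration.
    vocab = ["glue", "cheese", "sauce", "rocks"]
    scores = [2.0, 3.5, 3.4, 0.1]

    def sample_next_token(seed):
        """Weighted sampling; the same seed always picks the same token."""
        rng = random.Random(seed)                      # the "seed" mixed into generation
        weights = [2.718281828 ** s for s in scores]   # softmax-style weights
        return rng.choices(vocab, weights=weights, k=1)[0]

    print(sample_next_token(seed=42))   # same seed, same input -> same token every run
    print(sample_next_token(seed=42))
    print(sample_next_token(seed=7))    # a fresh seed is what makes answers vary

Real serving stacks draw a fresh seed per request, which is why end users see a different answer each time.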

[-] jyte@lemmy.world 0 points 5 months ago

What happened to my computers being reliable, predictable, idempotent? :'(

[-] QuadratureSurfer@lemmy.world 0 points 5 months ago

They still are. Giving a generative AI the same input and the same seed results in the same output every time.

[-] jyte@lemmy.world 1 points 5 months ago

Technically they still are, but since you have no control over the seed, in practice they are not.

[-] QuadratureSurfer@lemmy.world 1 points 5 months ago

OK, but we're discussing whether computers are "reliable, predictable, idempotent". Statements like this about computers are generally made when discussing the internal workings of a computer among developers or at even lower levels among computer engineers and such.

This isn't something you would say at a higher level for end-users because there are any number of reasons why an application can spit out different outputs even when seemingly given the "same input".

And while I could point out that Llama.cpp is open source (so you could just go in and test this by forcing the same seed every time...) it doesn't matter because your statement effectively boils down to something like this:

"I clicked the button (input) for the random number generator and got a different number (output) every time, thus computers are not reliable or predictable!"

If you wanted to make a better argument about computers not always being reliable/predictable, you're better off pointing at how radiation can flip bits in our electronics (which is one reason why we have implemented checksums and other tools to verify that information hasn't been altered over time or in transition). Take, for instance, the example of what happened to some voting machines in Belgium in 2003: https://www.businessinsider.com/cosmic-rays-harm-computers-smartphones-2019-7
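
For example (a minimal sketch using only the standard library; the file name and recorded digest are placeholders): a checksum only tells you whether the bytes you have now match the bytes that were recorded earlier.

    import hashlib

    def sha256_of(path):
        """Return the SHA-256 digest of a file, reading it in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder values: compare against a digest recorded when the data was known good.
    expected = "0000000000000000000000000000000000000000000000000000000000000000"
    if sha256_of("vote_totals.db") != expected:
        print("Data no longer matches the recorded checksum (corruption or tampering)")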

Anyway, thanks if you read this far, I enjoy discussing things like this.

[-] jyte@lemmy.world 1 points 5 months ago

You are taking all my words way too strictly as to what I intended :)

It was more along the lines of: up until now, as a computer user, I could (more or less) expect the tools (software/websites) I use to behave in a relatively consistent manner (even down to reproducing a crash after the same sequence of actions). Doing the same thing twice would (mostly) get me the same result or behaviour. For instance, an Excel feature applied to some given data should behave the same the next time I show it to a friend. Or if I found a result on Google by typing a given query, I could hopefully find that website again easily enough with the same query (even though it might have moved up or down a little in the ranking).

It's not strictly "reliable, predictable, idempotent", but consistent enough that people (users) will say it is.

But with these tools (i.e. ChatGPT), you get an answer, but you're unable to get that initial answer back with the same initial query; it's basically impossible to reproduce the same* output, because you have no control over the seed.

The random-number-generator comparison is a bit of a stretch: you expect it to be different; that's by design. As a user, you expect the LLM to give you the correct answer, but it actually never gives the same* answer.

*and here by "same" I mean "it might be worded differently, but the meaning is close to the previous answer". Just like if you ask someone a question twice, they won't use the exact same wording, but they'll essentially say the same thing. Which is something those tools (or rather "end-user services") do not give me. Which is what I wanted to point out in far fewer words :)

[-] otter@lemmy.ca 0 points 5 months ago

It could also be A/B testing, so not everyone will have the AI running in general

[-] credo@lemmy.world 0 points 5 months ago

It’s not A/B testing if they aren’t getting feedback.

[-] halcyoncmdr@lemmy.world 1 points 5 months ago

Google runs passive A/B testing all the time.

If you're using a Google service, there's a 99% chance you're part of some sort of internal test of changes.
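
This isn't Google's actual mechanism, but a common way passive rollouts work is deterministic bucketing: hash a stable user ID into a bucket so a fixed slice of users silently gets the variant, no explicit feedback required. A rough sketch (all names and percentages are made up):

    import hashlib

    def in_experiment(user_id: str, experiment: str, rollout_percent: int) -> bool:
        """Deterministically assign a user to an experiment bucket (0-99)."""
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) % 100   # stable: same user, same bucket every time
        return bucket < rollout_percent

    # Hypothetical: 5% of users silently get the "ai_overview" treatment.
    print(in_experiment("user-12345", "ai_overview", 5))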

[-] dutchkimble@lemy.lol 1 points 5 months ago

But the real question is, is the colour blue that you see, the colour blue that I see?

[-] RobotToaster@mander.xyz 0 points 5 months ago
[-] casmael@lemm.ee 1 points 5 months ago

Yup me too 🫡

[-] Nobody@lemmy.world 0 points 5 months ago* (last edited 5 months ago)
[-] AdamEatsAss@lemmy.world 1 points 5 months ago

Oh perfect. We'll just point production to your machine.

[-] Wanangwa_Bamidele@thelemmy.club 0 points 5 months ago

Yo, did you modify the HTML page to make this meme?

[-] ArcticAmphibian@lemmus.org 0 points 5 months ago

Nope. Google trained the model it's using for search results on junk data from Reddit and the like, and expected it to be coherent.

[-] webghost0101@sopuli.xyz 0 points 5 months ago

I wonder if they considered Reddit votes to try to give more weight to high-quality answers, which would of course also boost high-quality jokes.

But without votes, pure nonsense carries the same weight as truth.

Humans could use Reddit because we understand the site well enough to filter the valuable from the bad.

I feel like the answer would be an in-between AI trained specifically to be such a filter.

Every such post of Google failing, I have screen-capped and then asked ChatGPT for a more detailed explanation of how to do what Google suggests. Every time, it managed to call out the issues. So just letting an AI proofread its own response in the context of the question could stop a lot of hallucinations (see the sketch below).

But it's at least three times as slow and expensive if it needs to change its first response.

But I guess doing things properly isn't profitable; better to just rush the tech and kill your most famous product.
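
Roughly the pipeline I have in mind, as a hypothetical sketch (call_llm here is just a placeholder stub, not any real API):

    # Hypothetical sketch of the "proofread your own answer" idea above.
    # call_llm() is a placeholder stub, not any real model API.

    def call_llm(prompt: str) -> str:
        # Stand-in: a real implementation would call an actual model here.
        return "no issues found" if "Point out" in prompt else "draft answer"

    def answer_with_review(question: str) -> str:
        draft = call_llm(question)
        critique = call_llm(
            f"Question: {question}\nDraft answer: {draft}\n"
            "Point out anything unsafe, joking, or unsupported in the draft."
        )
        if "no issues" in critique.lower():
            return draft                    # first pass was fine: no extra cost
        # Second pass (the slow, expensive part): rewrite using the critique.
        return call_llm(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the answer, fixing the problems the critique points out."
        )

    print(answer_with_review("How do I keep cheese on pizza?"))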

[-] echodot@feddit.uk 0 points 5 months ago

People upvote stupid stuff as well though. Because humans understand humor, irony, and satire.

The AI is like those people who need the /s to work it out. If it's missing, they erroneously take everything seriously.

this post was submitted on 24 May 2024
26 points (96.4% liked)
