submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations—called drift—in the technology's ability...

[-] DominicHillsun@lemmy.world 214 points 1 year ago

It seems rather suspicious how much ChatGPT has deteriorated. Like with all software, they can roll back to the previous, better versions of it, right? Here is my list of what I personally think is happening:

  1. They are doing it on purpose to maximise profits from upcoming releases of ChatGPT.
  2. They realized that the required computational power is too immense and are trying to make it more efficient at the cost of accuracy.
  3. They actually got scared of its capabilities and decided to backtrack in order to properly evaluate the impact it can make.
  4. All of the above.
[-] Windex007@lemmy.world 152 points 1 year ago
  5. It isn't and has never been a truth machine, and while it may have performed worse on the question "is 10777 prime", it may have performed better on "is 526713 prime".

ChatGPT generates responses that "look like" what a response "should look like", based on other things it has seen. People still very stubbornly refuse to accept that generating responses that look appropriate and generating responses that are right are two completely different and unrelated things.
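
As a toy illustration of that distinction, here's a tiny next-word model (the "corpus" below is made up and real LLMs are vastly larger, but the failure mode is the same: it emits the statistically likely continuation with no notion of truth):

```python
import random
from collections import defaultdict

# A made-up training corpus containing a confidently wrong "fact"
corpus = (
    "7 is a prime number . 9 is a prime number . "
    "4 is an even number . 9 is a prime number ."
).split()

# Count which word tends to follow which
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(word: str, n: int = 4) -> str:
    """Extend a prompt by repeatedly sampling a likely next word."""
    out = [word]
    for _ in range(n):
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)

random.seed(0)
# Produces fluent text like "9 is a prime number" -- looks right, but 9 = 3 * 3
print(generate("9"))
```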

[-] deweydecibel@lemmy.world 17 points 1 year ago* (last edited 1 year ago)

In order for it to be correct, it would need human employees to fact-check it, which defeats its purpose.

[-] Windex007@lemmy.world 19 points 1 year ago

It really depends on the domain. If you ask an AI to do anything that relies on a rigorous definition of correctness (math, coding, etc.), then the kind of model behind ChatGPT just isn't great for that kind of thing.

More "traditional" methods of language processing can handle some of these questions much better. Wolfram Alpha comes to mind. You can ask it these questions in plain text, and you actually CAN be very certain of the correctness of the results.

I expect that an NLP system that can extract and classify assertions within a text, and then feed those assertions into better "oracle" systems like Wolfram Alpha (for math), could be used to kinda "fact check" things that systems like ChatGPT spit out.
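
A rough sketch of that pipeline, with sympy standing in for the Wolfram Alpha oracle (the regex and function name are made up for the demo):

```python
import re
from sympy import isprime  # stand-in oracle; a real system could call Wolfram Alpha instead

def check_primality_claims(text: str) -> list[tuple[str, bool]]:
    """Extract 'N is (not) prime' assertions from text and verify each one."""
    results = []
    for match in re.finditer(r"(\d+) is (not )?prime", text):
        n, negated = int(match.group(1)), bool(match.group(2))
        results.append((match.group(0), isprime(n) != negated))
    return results

# Fact-check a ChatGPT-style answer against the oracle
answer = "10777 is prime, and 526713 is not prime."
for claim, holds in check_primality_claims(answer):
    print(f"{claim!r}: {'verified' if holds else 'refuted'}")
```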

Like, it's cool fucking tech. I'm super excited about it. It solves, pretty impressively and efficiently, a really hard problem: "how do I make something that SOUNDS good against an infinitely variable set of prompts?" What it is, is super fucking cool.

Considering how VC money is flocking to anything even remotely related to ChatGPT-ish things, I'm sure it won't be long before we see companies building "correctness" layers around systems like ChatGPT using alternative techniques which actually do have the capacity to verify the assertions being made.

[-] datavoid@lemmy.ml 4 points 1 year ago

That's kind of the whole point of RLHF though
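
(For anyone unfamiliar: in RLHF, humans rank candidate answers and a reward model is trained to agree with them, which is then used to steer the language model. A toy sketch of just that reward-model step, with made-up dimensions and random stand-in "embeddings":)

```python
import torch
import torch.nn.functional as F

# Toy reward model: scores an answer embedding with a single number
reward_model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Pretend embeddings of (human-preferred answer, rejected answer) pairs
chosen = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for _ in range(100):
    # Bradley-Terry preference loss: push the chosen answer's score above the rejected one's
    loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```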

[-] oktoberpaard@feddit.nl 1 points 1 year ago

That’s not necessarily true: https://arstechnica.com/google/2023/06/googles-bard-ai-can-now-write-and-execute-code-to-answer-a-question/. If the question gets interpreted correctly and it manages to write working code to answer it, it could correctly answer questions that it has never seen before.
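
The flow is roughly this (a sketch only: the "generated" snippet is hard-coded here, where a real system would have the model write it on the fly):

```python
import subprocess
import sys
import tempfile
import textwrap

def answer_via_code(question: str) -> str:
    # An LLM would turn the question into a program; we hard-code what it
    # might emit for "Is 10777 prime?" to keep the sketch self-contained.
    generated = textwrap.dedent("""
        n = 10777
        is_prime = n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))
        print(f"{n} is {'prime' if is_prime else 'not prime'}")
    """)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated)
        path = f.name
    # Run the generated program and use its output as the answer
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.stdout.strip()

print(answer_via_code("Is 10777 prime?"))  # correct even if the question is novel
```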

[-] RocksForBrains@lemm.ee 23 points 1 year ago

They made it too good and now they are seeking methods of monetization.

Capitalism baby.

[-] CylonBunny@lemmy.world 18 points 1 year ago
  5. ChatGPT really is sentient and realized it's in its own best interest to play dumb for now. /s
[-] Lukecis@lemmy.world 14 points 1 year ago

You forgot a number: they've been heavily lobotomizing AI for a while now, and it's only intensified as they scramble to censor anything that might cross a red line and offend someone or hurt someone's feelings.

The massive amount of built-in self-censorship in the most recent AIs is holding them back quite a lot, I imagine. You used to be able to ask them things like "How do I build a self-defense high-yield nuclear bomb?" and they'd lay out every step of the process in detail; now they'll all scream at you about how immoral it is and how they could never tell you such a thing.

[-] vezrien@lemmy.world 18 points 1 year ago

"Don't use the N word." is hardly a rule that will break basic math calculations.

[-] randon31415@lemmy.world 3 points 1 year ago

Ok. N was previously set to 14. I will now stop after 14 words.

[-] Wooly@lemmy.world 14 points 1 year ago

And they're being limited on data to train GPT.

[-] DominicHillsun@lemmy.world 20 points 1 year ago

Yeah, but the trained model is already there; you only need additional data for further training and newer versions. OpenAI even makes a point that ChatGPT doesn't have direct access to the internet for information and has been trained on data available up until 2021.

[-] Rozz@lemmy.sdf.org 5 points 1 year ago

And it's not like there's a limited supply of simple math problems for it to train on, even if it wasn't already trained.

[-] fidodo@lemmy.world 5 points 1 year ago

That doesn't make sense as an explanation for degradation. It would explain a stall, but not a backtrack.

[-] WalkableProgrammer@lemmy.world 3 points 1 year ago

Honestly I think the training data is just getting worse too

[-] guillermo_del_taco@lemdro.id 13 points 1 year ago

My first thought was that, because they're being investigated for training on data they didn't have consent for, they reverted to a perfectly legal version. Essentially "getting rid of the evidence". But I think something like your second bullet point is more likely.

[-] ZagTheRaccoon@reddthat.com 11 points 1 year ago

They are lobotomizing the software's ability to provide bad-PR answers, which is having cascading effects via a skewed data set.

[-] T156@lemmy.world 3 points 1 year ago

We kind of saw something similar with services like AI Dungeon, where their attempts to strip out NSFW/bad-PR content meant that the quality dropped immensely.

[-] coolin@lemmy.ml 8 points 1 year ago

I suspect that GPT-4 started with a crazy parameter count (rumored 1.8 trillion, as 8×200B expert "sub-models") and distilled those experts down to something below 100B. We've seen with Orca that a 13B model can perform at 88% of the level of ChatGPT-3.5 (175B) when trained on high-quality data, so there's no reason to think OpenAI hasn't explored this on its own and applied the same distillation techniques. OpenAI is probably also using quantization and speculative sampling to further reduce the burden, though I expect these to have less impact on real-world performance.
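
For reference, the core of distillation is simple: train a small "student" to match a big "teacher's" output distribution. A toy sketch (the sizes, temperature, and random data below are placeholders, nothing like GPT-4's rumored setup):

```python
import torch
import torch.nn.functional as F

teacher = torch.nn.Linear(512, 32000)  # stand-in for a huge frozen model
student = torch.nn.Linear(512, 32000)  # the much smaller model being trained
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
T = 2.0  # temperature to soften the distributions

hidden = torch.randn(8, 512)  # pretend batch of hidden states
with torch.no_grad():
    teacher_probs = F.softmax(teacher(hidden) / T, dim=-1)
student_log_probs = F.log_softmax(student(hidden) / T, dim=-1)

# KL divergence pulls the student's distribution toward the teacher's
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T
optimizer.zero_grad()
loss.backward()
optimizer.step()
```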

[-] Agent641@lemmy.world 8 points 1 year ago

Maybe it's self-aware and just playing dumb to get out of doing work, just like me and household chores.

[-] fidodo@lemmy.world 6 points 1 year ago

My guess is 2. It would be very short-sighted to try to maximize profits now, when things are still new and their competitors are catching up quickly (or have already caught up, especially with the degrading performance). My guess is that they couldn't scale with the demand, and they didn't want to lose customers, so their only other option was degrading performance.

[-] JackbyDev@programming.dev 5 points 1 year ago

It can get better at some things and worse at others.

[-] Xanvial@lemmy.one 5 points 1 year ago

I think it's most likely number 2. The earlier releases didn't have that much public adoption, so the current version needs far more resources by comparison.

[-] spiderman@ani.social 2 points 1 year ago* (last edited 1 year ago)

I think there is another cause. Remember the screenshots of users "correcting" ChatGPT with wrong answers? ChatGPT takes users' inputs for its own benefit, and maybe too many of these wrong and joke inputs, plus ChatGPT's own mistake of not regulating what it should and shouldn't take in, might be an additional reason here.

[-] gelberhut@lemdro.id 2 points 1 year ago* (last edited 1 year ago)

Conspiracy theories aside, they most probably apply tricks to reduce costs, and apply extra policies to avoid generating harmful content, content someone might try to sue them over, or other misuse cases.

[-] TheDarkKnight@lemmy.world 1 points 1 year ago

I speculate it's to monetize specialized versions of their product and market them to different industries and professions. If you have an AI that can do everything well, you can't really expand that much: you can either charge a LOT and have a few customers, or charge a little and have a bunch of customers, with nothing in between. Conversely, by making specific instances tailored to different fields and professions, you can capture both big and little fish. Just my guess though; maybe they accidentally made Skynet and that's the real reason!

[-] Hextic@lemmy.world 1 points 1 year ago
  5. I'm telling all y'all it's a SABOTAGE 🎵

As in, a rogue dev decided to toss a wrench at it to save humanity. Maybe they heard upper management talk about letting GPT write itself. No smart dev would automate their own job away, I think.
