The research from Purdue University, first spotted by the news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii. It examined 517 programming questions from Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

[-] zelifcam@lemmy.world 12 points 6 months ago* (last edited 6 months ago)

“Major new Technology still in Infancy Needs Improvements”

-- headline every fucking day

[-] lauha@lemmy.one 15 points 6 months ago

"Corporation using immature technology in productions because it's cool"

More news at eleven

[-] capital@lemmy.world 1 point 6 months ago

This is scary because up to now, all software released worked exactly as intended so we need to be extra special careful here.

[-] otp@sh.itjust.works 1 point 6 months ago

Yes, and we never have and never will put lives in the hands of software developers before!

/s ...for this comment and the one above, for anyone who needs it

[-] jmcs@discuss.tchncs.de 8 points 6 months ago

Unready technology that spews dangerous misinformation in the most convincing way possible is being massively promoted.

[-] AIhasUse@lemmy.world -5 points 6 months ago

Yeah, because no human would convincingly lie on the internet. Right, Arthur?

It's literally built on what confidently incorrect people put on the internet. The only difference is that there are constant disclaimers on it saying it may give incorrect information.

Anyone too stupid to understand how to use it is too stupid to use the internet safely anyways. Or even books for that matter.

[-] jmcs@discuss.tchncs.de 4 points 6 months ago

Holy mother of false equivalence. Google is not supposed to be a random dude on the Internet, it's supposed to be a reference tool, and for the most part it was a good one before they started enshittifying it.

[-] AIhasUse@lemmy.world -2 points 6 months ago

Google is a search engine. It points you to web pages that are made by people. Many times, the people who make those websites have put things on them that are knowingly or unknowingly incorrect but stated in an authoritative manner. That was all I was saying, nothing controversial. That's been a known fact for a long time. You can't just read something on a single site and then be sure that it has to be true. I get that there are people who strangely fall in love with specific websites and think they are absolute truth, but that's always been a foolish way to use the internet.

A great example of people believing blindly is all these horribly doctored Google AI images saying ridiculous things. There are so many idiots who think that every time they see a screenshot of Google AI saying something absurd, it has to be real. People have even gone so far as to use ridiculous fonts just to show how easy it is to get people to trust anything. Now there's a bunch of idiots who think all 20 or so Google AI mistakes they've seen are genuine, so much so that they think almost all Google AI responses are incorrect. Some people are very stupid. Sorry to break it to you, but LLMs are not the first thing to put incorrect information on the internet.

[-] chaosCruiser@futurology.today 2 points 6 months ago* (last edited 6 months ago)

The way I see it, we’re finally sliding down the trough of disillusionment.

[-] AIhasUse@lemmy.world 0 points 6 months ago

I'm honestly a bit jealous of you. You are going to be so amazed when you realise this stuff is just barely getting started. It's insane what people are already building with agents. Once this stuff gets mainstream, and specialized hardware hits the market, our current paradigm is going to seem like silent black and white films compared to what will be going on. By 2030 we will feel like 2020 was half a century ago at least.

[-] chaosCruiser@futurology.today 0 points 6 months ago

Looking forward to it, but won’t be disappointed if it takes a bit longer than expected.

[-] AIhasUse@lemmy.world -1 points 6 months ago

Ray Kurzweil has a phenomenal record of making predictions. He's like 90% or something and has been saying AGI by 2029 for something like 30+ years. Last I heard, he is sticking with it, but he admits he may be a year or two off in either direction. AGI is a pretty broad term, but if you take it as "better than nearly every human in every field of expertise," then I think 2029 is quite reasonable.

[-] chaosCruiser@futurology.today 0 points 6 months ago

That’s not very far in the future, so it’s going to be really exciting to see how that works out.

[-] explodicle@sh.itjust.works 0 points 6 months ago

Maybe only 51% of the code it writes needs to be good before it can self-improve. In which case, we're nearly there!

[-] AIhasUse@lemmy.world -1 points 6 months ago

We are already past that. The 48% figure is from a version of ChatGPT (3.5) that came out a year ago; there has been a lot of progress since then.

[-] TropicalDingdong@lemmy.world -1 points 6 months ago* (last edited 6 months ago)

"Will this technology save us from ourselves, or are we just jerking off?"

[-] SnotFlickerman@lemmy.blahaj.zone -1 points 6 months ago

in Infancy Needs Improvements

I'm just gonna go out on a limb and say that if we have to invest in new energy sources just to make these tools functionally usable... maybe we're better off just paying people to do these jobs instead of burning the planet to a rocky dead husk to achieve AI?

[-] Thekingoflorda@lemmy.world 0 points 6 months ago

Just playing devil’s advocate here, but if we could get to a future with algorithms so good they are essentially a talking version of all human knowledge, this would be a great thing for humanity.

[-] SnotFlickerman@lemmy.blahaj.zone 1 points 6 months ago* (last edited 6 months ago)

this would be a great thing for humanity.

That's easy to say. Tell me how. Also tell me how to do it without it being biased about certain subjects over others. Captain Beatty would wildly disagree with this even being possible. His whole shtick in Fahrenheit 451 is that all the books disagreed with one another, so that's why they started burning them.

this post was submitted on 25 May 2024
188 points (97.5% liked)