The research from Purdue University, first spotted by news outlet Futurism, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii and looked at 517 programming questions on Stack Overflow that were then fed to ChatGPT.

“Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose,” the new study explained. “Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style.”

Disturbingly, programmers in the study didn’t always catch the mistakes being produced by the AI chatbot.

“However, they also overlooked the misinformation in the ChatGPT answers 39% of the time,” according to the study. “This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.”

[-] foremanguy92_@lemmy.ml 0 points 6 months ago

We have to wait a bit to have a useful assistant (but maybe something like Copilot or a more code-focused AI is better)

[-] tranxuanthang@lemm.ee 0 points 6 months ago* (last edited 6 months ago)

If you don't know what you are doing, and you give it a vague request hoping it will automatically solve your problem, then you will just spend even more time debugging the code it gives you.

However, if you know exactly what needs to be done and give it a good prompt, then it will reward you with well-written code, a clean implementation, and comments. Consider it an intern or junior developer.

Example of bad prompt: My code won't work [paste the code], I keep having this error [paste the error log], please help me

Example of (reasonably) good prompt: This code introduces deep recursion and can sometimes cause a "maximum stack size exceeded" error in certain cases. Please help me convert it to use a while loop instead.
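The recursion-to-loop conversion in the "good prompt" example can be sketched in Python. The thread doesn't include the commenter's actual code, so the `flatten_recursive`/`flatten_iterative` functions below are hypothetical stand-ins; the point is the pattern of replacing the call stack with an explicit stack inside a while loop:

```python
import sys

# Recursive version: each nested level adds a call-stack frame, so
# deeply nested input raises RecursionError (the Python analogue of
# JavaScript's "maximum stack size exceeded").
def flatten_recursive(nested):
    flat = []
    for item in nested:
        if isinstance(item, list):
            flat.extend(flatten_recursive(item))
        else:
            flat.append(item)
    return flat

# Iterative version: an explicit stack of iterators replaces the call
# stack, so nesting depth is limited only by available memory.
def flatten_iterative(nested):
    flat = []
    stack = [iter(nested)]
    while stack:
        try:
            item = next(stack[-1])
        except StopIteration:
            stack.pop()  # finished this level, drop back down
            continue
        if isinstance(item, list):
            stack.append(iter(item))  # descend into the nested list
        else:
            flat.append(item)
    return flat

# Build input nested deeper than the default recursion limit.
deep = [1]
for _ in range(sys.getrecursionlimit() + 100):
    deep = [deep]

print(flatten_iterative(deep))  # succeeds where the recursive version errors
```

Calling `flatten_recursive(deep)` on the same input raises `RecursionError`, which is exactly the kind of concrete, reproducible detail the commenter suggests putting in the prompt.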

[-] Furbag@lemmy.world 0 points 6 months ago* (last edited 6 months ago)

People downvote me when I point this out in response to "AI will take our jobs" doomerism.

[-] SnotFlickerman@lemmy.blahaj.zone 0 points 6 months ago* (last edited 6 months ago)

So the issue for me is this:

If these technologies still require large amounts of human intervention to be usable, then why are we expending so much energy on them?

Why not skip the burning the planet to a crisp for half-formed technology that can't give consistent results and instead just pay people a living fucking wage to do the job in the first place?

Seriously, one of the biggest jokes in computer science is that debugging other people's code gives you worse headaches than migraines.

So now we're supposed to dump insane amounts of money and energy (as in burning fossil fuels and needing so much energy they're pushing for a nuclear resurgence) into a tool that results in... having to debug other people's code?

They've literally turned all of programming into the worst aspect of programming for barely any fucking improvement over just letting humans do it.

Why do we think it's important to burn the planet to a crisp in pursuit of this when humans can already fucking make art and code? Especially when we still need humans to fix the fucking AIs work to make it functionally usable. That's still a lot of fucking work expected of humans for a "tool" that's demanding more energy sources than currently exists.

[-] AIhasUse@lemmy.world -1 points 6 months ago

There is a good chance that it will be instrumental in discoveries that lead to efficient clean energy. It's not as if we were on some super clean, unabused planet before language models came along. We have needed help for quite some time. Almost nobody wants to change their own habits (meat, cars, planes, constant AC and heat...), so we need something. Maybe AI will help in this endeavor like it has at so many other things.

[-] Voytrekk@lemmy.world 0 points 6 months ago

Just like answers on the Internet, you have to read the output and not just paste it blindly. I find the answers are usually useful, even if they aren't completely accurate. Figuring out the last bit is why we are paid as programmers.

[-] OpenStars@discuss.online -1 points 6 months ago

So it is incorrect and verbose, but also comprehensive and using a well-articulated language style at the same time?

Also "study participants still preferred ChatGPT answers 35% of the time", meaning that the overwhelming majority (two-thirds) did not prefer the bot answers over the human(e), correct ones, that maybe were not phrased as confidently as they could have been.

Just say it out loud: ChatGPT is style over substance, aka Fox News. 🦊

[-] gravitas_deficiency@sh.itjust.works -1 points 6 months ago

C-suites:

tHis iS inCReDibLe! wE cAn SavE sO MUcH oN sTafFiNg cOStS!

this post was submitted on 25 May 2024
188 points (97.5% liked)