399 points (85.6% liked) · submitted 26 Aug 2023 by L4s@lemmy.world to c/technology@lemmy.world

ChatGPT generates cancer treatment plans that are full of errors — Study finds that ChatGPT provided false information when asked to design cancer treatment plans

Researchers at Brigham and Women's Hospital found that cancer treatment plans generated by OpenAI's revolutionary chatbot were full of errors.

NigelFrobisher@aussie.zone 16 points 1 year ago

People really need to understand what LLMs are, and also what they are not. None of the messianic hype, or even the use of the term "AI", helps with this, and most of the ridiculous claims made in the space make me expect Peter Molyneux to be involved somehow.

dx1@lemmy.world 4 points 1 year ago (last edited 1 year ago)

LLMs fit in the "weak AI" category. I'd be inclined not to call them "AI" at all, since there is no intelligence, just the illusion of intelligence (if only I could redefine the term "AI"). It's possible to build genuinely intelligent AI, but probabilistic text construction isn't even close.
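
To make "probabilistic text construction" concrete, here's a minimal sketch (the tokens and probabilities are invented for illustration) of what generation reduces to: sampling the next token by likelihood, with no step anywhere that checks truth.

```python
# A minimal sketch of "probabilistic text construction": generation is just
# repeatedly sampling the next token from a learned probability distribution.
# The vocabulary and probabilities below are invented for illustration.
import random

# Hypothetical next-token distribution after a prompt like
# "The recommended first-line drug is".
next_token_probs = {
    "cisplatin": 0.40,    # plausible continuation
    "paracetamol": 0.35,  # fluent but likely wrong continuation
    "sunlight": 0.25,     # fluent nonsense is still a valid sample
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token by weight; nothing here checks whether it's true."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print("Next token:", sample_next_token(next_token_probs))
```

Every run produces something fluent; whether it's also correct depends entirely on how the training data happened to shape the distribution, which is why the "treatment plans" in the article come out confidently wrong.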

fsmacolyte@lemmy.world 5 points 1 year ago

> It's possible to build intelligent AI

What does intelligent AI that we can currently build look like?

dx1@lemmy.world 2 points 1 year ago (last edited 1 year ago)

There's a difference between "can build" and "have built". The basic idea is continuously aggregating data, performing pattern analysis, and doing cognitive schema assimilation/accommodation in the same way humans do. It's absolutely doable, at least I think so.

fsmacolyte@lemmy.world 1 point 1 year ago

I haven't heard of cognitive schema assimilation. That sounds interesting, though it seems like it might fall prey to the same challenges we've had with symbolic AI in the past.

dx1@lemmy.world 1 point 1 year ago (last edited 1 year ago)

It's a concept from psychology. Instead of just a model of linguistic construction, the model has to be a comprehensive, data-forged model of reality, as far as human observation reaches and as far as we care about. In poorly tuned, low-information scenarios it would fall into mostly the same traps humans do (e.g. falling for propaganda or pseudoscientific theories), but if finely tuned it should produce accurate theories, and even predictive results, given an expansive enough domain.
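
As a toy sketch of that assimilation/accommodation loop (the vector representation, the threshold, and the data are all assumptions made up for illustration, not a real cognitive model): each observation either updates the closest existing schema or, if nothing fits, forces the model to grow a new one.

```python
# A toy sketch of schema assimilation/accommodation, assuming a "schema" can
# be reduced to a prototype vector with a match threshold. All names and
# numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Schema:
    prototype: list[float]  # running mean of the observations it absorbed
    count: int = 1

def distance(a: list[float], b: list[float]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def observe(schemas: list[Schema], obs: list[float], threshold: float = 1.0) -> None:
    """Assimilate obs into the closest schema if it fits the existing model;
    otherwise accommodate by creating a new schema around it."""
    if schemas:
        best = min(schemas, key=lambda s: distance(s.prototype, obs))
        if distance(best.prototype, obs) <= threshold:
            best.count += 1  # assimilation: fold obs into the prototype
            best.prototype = [p + (o - p) / best.count
                              for p, o in zip(best.prototype, obs)]
            return
    schemas.append(Schema(prototype=list(obs)))  # accommodation: new schema

schemas: list[Schema] = []
for obs in ([0.0, 0.1], [0.2, 0.0], [5.0, 5.0]):
    observe(schemas, obs)
print(len(schemas))  # 2: the two nearby observations merged, the outlier didn't
```

The threshold is where the "tuning" lives: set it too loose and everything gets assimilated into one muddled schema (the propaganda trap above), too strict and the model fragments into one schema per observation.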
