393 points (95.4% liked), submitted 12 Jun 2024 by neme@lemm.ee to c/technology@lemmy.world
ChairmanMeow@programming.dev 8 points 5 months ago

My point is that telling it a right answer is wrong often causes LLMs to completely shit the bed. They used to argue with you nonsensically; now they give you a different answer (often also wrong).

The only question missing at the start was "How many r's are there in the word 'veryberry'?" I think 'raspberry' also worked when I tried it. This was ChatGPT-4o. I did mark all the answers as bad, so perhaps they've fixed this one by now.
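For reference, the correct counts for the two words mentioned are easy to verify; a quick Python sanity check (not part of the original comment):

```python
# str.count tallies non-overlapping occurrences of a substring.
for word in ("veryberry", "raspberry"):
    print(word, "->", word.count("r"))
# veryberry -> 3
# raspberry -> 3
```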

Still, it's remarkably trivial to get an LLM to provide a clearly non-human response.

theherk@lemmy.world 1 point 5 months ago

Fair enough, but it does somewhat undercut your message when every model I've tested, including quite old ones, answers this question correctly on the first try. This image is ChatGPT-4o.

ChairmanMeow@programming.dev 7 points 5 months ago

Perhaps it was being influenced by the chat history. But try asking how many r's are in "raspberry"; it consistently gets that wrong for me. And you can ask it those follow-up questions to easily get it to spout nonsense, which was mostly my point: figuring out whether you're talking to an LLM is fairly trivial.
