[-] antonim@lemmy.dbzer0.com 4 points 16 hours ago

> to fool into errors

> tricking a kid

I've never tried to fool or trick AI with excessively complex questions. When I tested it (a few different models over a period of time: ChatGPT, Bing AI, Gemini), I asked things as simple as "what's the etymology of this word in that language" or "what is [some phenomenon]". The models still produced responses ranging from shoddy to absolutely ridiculous.

> completely detached from how anyone actually uses

I've seen numerous people use it the same way I tested it, basically as a Google search you can talk to, with similarly shit results.

[-] archomrade@midwest.social 1 points 16 hours ago

Why do we expect a higher degree of trustworthiness from a novel LLM than we do from any given source or forum comment on the internet?

At what point do we stop hand-wringing over LLMs failing to meet some perceived level of accuracy and hold the people using them responsible for verifying the responses themselves?

There's a giant disclaimer on every one of these models that responses may contain errors or hallucinations. At this point I think it's fair to blame the user for ignoring those warnings, not the models for failing to meet some arbitrary standard.
