submitted 1 year ago* (last edited 1 year ago) by jeena@jemmy.jeena.net to c/technology@lemmy.world
[-] Blaster_M@lemmy.world 77 points 1 year ago

AI hallucinates, stating it phones home when corrected; user did not Wireshark it to confirm.
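A packet capture (tcpdump/Wireshark) is the real test here, but a quick in-process check is also possible: intercept Python's socket layer while the client runs and record any outbound connection attempts. This is only a sketch — `fn` stands in for whatever call drives your local model, and note that a native library can open sockets without going through Python's `socket` module at all, which is exactly why a packet capture is still the authoritative check.

```python
import socket

def record_connections(fn):
    """Run fn() while recording every outbound socket connection attempt.

    Only catches connections made through Python's socket module; native
    code can bypass this entirely, so treat an empty result as weak
    evidence, not proof.
    """
    attempts = []
    original_connect = socket.socket.connect

    def spying_connect(self, address):
        attempts.append(address)  # record the target before connecting
        return original_connect(self, address)

    socket.socket.connect = spying_connect
    try:
        fn()
    finally:
        # always restore the real connect, even if fn() raises
        socket.socket.connect = original_connect
    return attempts
```

Usage would look like `record_connections(lambda: model.generate("hello"))`, where `model` is a hypothetical local client object; an empty list back means no Python-level connection was attempted during the call.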

[-] jeena@jemmy.jeena.net 48 points 1 year ago

Oh man. Ok, TIL that I'm not better than all the old people on Facebook believing what the scammers tell them.

[-] aard@kyu.de 17 points 1 year ago

I find this situation rather entertaining. It shows yet again how important it is to educate people on the basics of how LLMs work, including how they are executed - I'm guessing with just a tiny bit more knowledge it'd also have been obvious nonsense to you.

[-] db0@lemmy.dbzer0.com 45 points 1 year ago* (last edited 1 year ago)

The model is just hallucinating. It has no capacity to execute code on its own, and most FOSS clients of course won't send anything back to Meta.

[-] Diplomjodler3@lemmy.world 16 points 1 year ago

Damn thing doesn't even know it's running locally. Just ask it. And it can't tell the time.

[-] DarkThoughts@fedia.io 12 points 1 year ago

Another easy test is to ask a question, note the answer, then clear the chat and repeat the same question. Do this over and over again and you'll see varying responses, because most of it is just made up rather than pulled from some pool of information. A lot of those local models are really only good for roleplaying purposes. But even the large commercial models that actually were trained on a lot of potentially valuable information have this issue, which is why you should never blindly trust LLM answers.
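The clear-and-repeat test above can be sketched as a small harness. `generate` here is a hypothetical callable standing in for whatever talks to your local model - each call is assumed to start with a fresh, empty chat history:

```python
def ask_fresh(prompt, generate, runs=5):
    """Ask the same prompt in independent 'cleared' sessions.

    `generate` is any callable mapping a prompt string to an answer
    string; each call is assumed to carry no prior chat history.
    """
    return [generate(prompt) for _ in range(runs)]

def consistency(answers):
    """Fraction of runs agreeing with the most common answer."""
    top = max(set(answers), key=answers.count)
    return answers.count(top) / len(answers)
```

With a sampled (non-deterministic) model, `consistency` well below 1.0 across runs is the telltale sign the commenter describes: the answer is being generated, not retrieved.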

[-] db0@lemmy.dbzer0.com 3 points 1 year ago

Of course not. They don't have any external info other than what you provide them. They don't know the concept of "running locally" at all.

[-] wewbull@feddit.uk 21 points 1 year ago

[update] A better headline would have been: AI hallucinates, stating it phones home when corrected; user did not Wireshark it to confirm.

TIL that I'm not better than all the old people on Facebook believing what the scammers tell them. Let that be a lesson to you: don't use an LLM for fact checking.

[/update]

Fair play to OP. They've updated the story and saved me writing a tirade on how LLMs are not trustworthy.

[-] tedu@azorius.net 15 points 1 year ago

This is just nonsense. The model doesn't even know what program is being run to do the inference.

[-] ObviouslyNotBanana@lemmy.world 1 point 1 year ago

If anything, this was at least entertaining!

this post was submitted on 26 Apr 2024
55 points (80.9% liked)
