Ok, but the point is that lots of people would just say something and then figure out if it's right later.
Quite frankly, you sound like middle school teachers being hysterical about Wikipedia being wrong sometimes.
LLMs are already being used for policy making, business decisions, software creation and the like. The issue is bigger than summarisers, and "hallucinations" are a real problem when they lead to real decisions and real consequences.
If you can't imagine why this is bad, maybe read some Kafka or watch some Black Mirror.
The use of LLMs for policy making is probably an obfuscation technique to complicate later court challenges. If we still have courts by then.