this post was submitted on 05 Nov 2023
276 points (96.6% liked)
Technology
Part of the problem is deciding who gets to define misinformation. As soon as the state decides what is and isn't true, and thus what can and cannot be said, you no longer have free speech.
The state deciding on speech is a red line, yes, but that's not even on the table here. This is about social media moderation. Bringing state censorship into this discussion seems disingenuous.
OP: Thread about social media moderation
You: The state deciding what’s true is the death of free speech!
Actually, your comment illustrates one of the big problems in this debate: people can't tell the difference between a private social media firm moderating hate content and the government taking away their freedom of speech. You conflated the two yourself by bringing it up here.
Centralized for-profit companies policing speech doesn't resolve free speech concerns either. It doesn't violate the US First Amendment, but corporate-approved speech isn't truly free speech. No person or organization is suitable to be the arbiter of truth, yet unmoderated misinformation presents its own problems.
Courtrooms are arbiters of truth literally all the time. There are plenty of laws for which truth is a defence, and dishonesty is punished.
When battling misinformation, the problem is not that lying on the internet is legal; it is still actionable. Fraud is illegal. False or misleading advertising is illegal. Defamation is illegal. Perjury is illegal in the criminal sense, not just as a tort. Ask Martha Stewart who the "arbiter of truth" is.
The problem is that it's functionally impossible to enforce on the scale of social media. If 50,000 people call you a pedophile because it became a meme even though it was completely untrue, and this costs you your job and you start getting death threats, what are you going to do about that? Sue them all?
So we throw up our hands and let corporations handle it through abuse policies, because the actual law is unworkable at that scale: it's "this is illegal, but enforcing it is so impractical that it might as well be legal." Twitter and Facebook don't have to deal with that, so we let them run a vague implementation of the law, minus the due process, where the only justice they can mete out is bans.
If you disagree, then I've got a Nigerian prince who'd like to get your banking info, and also you're all cannibals.