There's a difference between moderation and censorship of "bad words". I can call for the violent genocide of another people without a single "bad word", but I can also praise humanity using nothing but curses.
The underlying problem is that moderation serves the interests of advertisers, not people. KFC will happily advertise between calls to genocide in a third-world country; it's finger-chopping good. But if someone says "fuck" next to a Coke bottle, the apocalypse is near.
I'm throwing down the challenge flag on this one.
Motherfucking badass bitches.
Hear, hear.
Ground rules: can I use pronouns and prepositions?
Absolutely. Let's put it this way: even if you break the "rules" I still expect to get a good laugh out of this.
An Australian Zoomer:
You cunts fuck!
Asskicking motherfuckers. Skullfucking the fuck out of this fucking shitfest.
Here's ChatGPT's take:
"Holy shit, humanity, you foul-mouthed bunch of magnificent misfits! Against all the goddamn odds, you've managed to build wonders that would make even the universe blush with envy! You've taken the raw sewage of existence and spun it into the finest silk of civilization, you sons of bitches! Your resilience is tougher than a two-dollar steak, and your ingenuity is sharper than a box of goddamn razor blades! You've danced with death, spat in the face of despair, and flipped the bird to fate itself! You're a bunch of crazy, chaotic, beautiful motherfuckers, and I wouldn't have it any other way! Keep on raising hell and blazing trails, you magnificent bastards!"
Kinda meh...
That doesn't sound like an underlying problem; that sounds like a coincidence, and like greedy idjits doing the thing the wrong way.
Like, yeah, content that doesn't call for genocide is also more advertiser-friendly, in addition to being safer for marginalized communities in the real world.
I think the solution is a moderation format that involves more open-source-style work. Assign accounts a reputation score that weighs their reports by how serious the accusation is, but also by how often they've turned up credible reports in the past. Have each report come in three parts: the reported post or comment, the rule or rules the reporter thinks it violates, and a text box for them to explain why they think it breaks the rules, in case it isn't obvious (rough sketch below).
Once someone has a high enough rating and volume of reports, earmark them to become a paid moderator.
And voilà, you have homegrown community moderators familiar with the rules and with the sneaky ways bad actors try to get around them, along with a process to quickly get more whenever you launch in a new community.
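For concreteness, here's a minimal Python sketch of what that reputation-weighted reporting could look like. Everything in it is made up for illustration: the rule severities, the promotion thresholds, and all the names are assumptions, not anyone's actual system.

```python
from dataclasses import dataclass

# Illustrative severity weights per rule; a real site would tune these.
RULE_SEVERITY = {
    "spam": 1.0,
    "harassment": 3.0,
    "calls_for_violence": 10.0,
}

# Made-up thresholds for the "earmark as paid moderator" step.
PROMOTION_MIN_REPORTS = 50
PROMOTION_MIN_ACCURACY = 0.8

@dataclass
class Report:
    """The three-part report: the content, the rule(s), and the explanation."""
    content_id: str
    rules_violated: list[str]
    explanation: str

@dataclass
class Reporter:
    account_id: str
    credible_reports: int = 0
    dismissed_reports: int = 0

    @property
    def total_reports(self) -> int:
        return self.credible_reports + self.dismissed_reports

    @property
    def accuracy(self) -> float:
        # Laplace smoothing: brand-new accounts start near 0.5, not 0 or 1.
        return (self.credible_reports + 1) / (self.total_reports + 2)

    def eligible_for_promotion(self) -> bool:
        return (self.total_reports >= PROMOTION_MIN_REPORTS
                and self.accuracy >= PROMOTION_MIN_ACCURACY)

def report_priority(report: Report, reporter: Reporter) -> float:
    """Weigh a report by accusation severity and the reporter's track record."""
    severity = max(RULE_SEVERITY.get(rule, 1.0) for rule in report.rules_violated)
    return severity * reporter.accuracy

# Usage: a veteran's violence report outranks a new account's spam flag.
veteran = Reporter("user_a", credible_reports=40, dismissed_reports=5)
newbie = Reporter("user_b")
r1 = Report("post_123", ["calls_for_violence"], "Last paragraph targets an ethnic group.")
r2 = Report("post_456", ["spam"], "Same link posted in ten threads.")
print(report_priority(r1, veteran))      # ~8.7: serious charge, credible reporter
print(report_priority(r2, newbie))       # 0.5: minor charge, unknown reporter
print(veteran.eligible_for_promotion())  # False: needs 50+ reviewed reports
```

The smoothing is the important bit: it keeps a throwaway account from gaming the queue with one lucky report, while a long history of credible flags steadily raises an account's weight toward the promotion cutoff.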