Substack says it will not remove or demonetize Nazi content
(www.theverge.com)
It is true that removing and demonetising Nazi content wouldn't make the problem of Nazis go away. It would just be moved to dark corners of the internet where the majority of people would never find it, and its presence on dodgy-looking websites combined with its absence on major platforms would contribute to a general sense that being a Nazi isn't something that's accepted in wider society. Even without entirely making the problem go away, the problem is substantially reduced when it isn't normalised.
the weirdest thing to me is these guys always ignore that banning the freaks worked on Reddit--which is stereotypically the most cringe techno-libertarian platform of the lot--without ruining the right to say goofy shit on the platform. they banned a bunch of the reactionary subs and, spoiler, issues with those communities have been much lessened since that happened while still allowing for people to say patently wild, unpopular shit
Yep! Reddit is still pretty awful in many respects (and I only even bother with it for specific communities for which I haven't found a suitable active equivalent on Lemmy - more frogs and bugs on Lemmy please), but it did get notably less unpleasant when the majority of the truly terrible subs were banned. So it does make a difference.
I feel like "don't let perfect be the enemy of good" is apt when it comes to reactionaries and fascists. Completely eliminating hateful ideologies would be perfect, but limiting their reach is still good, and saying "removing their content doesn't make the problem go away" makes it sound like any effort to limit the harm they do is rendered meaningless because the outcome is merely good rather than perfect.
They took way too long, unfortunately, but I totally agree. thedonald, femaledatingstrategy and fatpeoplehate should have been banned a lot sooner.
It feels like they've let it degrade again too now. Last I was on it, lots of subs had gone really toxic and weird
You're literally on a platform that was created to harbor extremist groups. Look at who Dessalines (aka u/parentis-shotgun) is, and their self-proclaimed motivation for writing LemmyNet. When you ban people from a website, they just move to another place; they are not stupid, and it's pretty easy to create websites. It's purely optical: you're not saving civilisation from harmful ideas, just preventing yourself from seeing them.
you are literally describing an event that induces the sort of entropy we're talking about here--necessarily when you ban a community of Nazis or something and they have to go somewhere else, not everybody moves to the next place (and those people diffuse back into the general population), which has a deradicalizing effect on them overall because they're not just stewing in a cauldron of other people who reinforce their beliefs
"A deradicalising effect"
I'm sorry, what? The idea that smaller communities are somehow less radical is absurd.
I think you are unaware (or much more likely willfully ignoring) that communities are primarily dominated by a few active users, and simply viewed with a varying degree of support by non-engaging users.
If they never valued communities enough to stay with them, then they never really cared about the cause to begin with. These aren't the radicals you need to be concerned about.
"And those people diffuse back into the general population"
Because that doesn't happen to a greater degree when exposed to the "general population" on the same website?
i'd like you to quote where i said this--and i'm just going to ignore everything else you say here until you do, because it's not useful to have a discussion in which you completely misunderstand what i'm saying from the first sentence.
The deradicalizing effect occurs in the people who do not follow the fringe group to a new platform.
Many people lurk on Reddit who will see extremist content there and be influenced by it, but who do not align with the group posting it directly, and will not seek them out after their subreddit or posted content is banned.
Sure but what degree of influence is actually "radicalising" or a point of concern?
We like to pretend that by banning extreme communities we are saving civilisation from them. But the fact is that extreme groups are already rejected by society. If your ideas are not actually somewhat adjacent to already held beliefs, you can't just force people to accept them.
I think a good example of this was the "fall" of Richard Spencer. All the leftist communities (in which I was semi-active at the time) credited his decline to the punch he received, apparently assuming it was the act of punching itself that caused his downfall, and used it to justify more violent actions. The reality is that Spencer just had a clique of friends that the left (and Spencer himself) interpreted as wide support, and when he was punched the greater public didn't care, because they never cared about him.
Whom are we talking about here, the ones who get kicked out and seek each other in a more concentrated form, or the ones who are left behind without the radicalizing agents?
I don't want to have to deal with Nazis, or several other sects, but I don't think forcing them into a smaller echo chamber is helping either.
Ideally, I think a social platform should lure radicalizing agents, then expose them to de-radicalizing ones, without exposing everyone else. Might be a hard task to achieve, but worth it.
You really think this works? I don't. I just see them souring the atmosphere for everyone and attracting more mainstream users to their views.
We've seen in Holland how this worked out. The Nazi party leader (who chanted "Less Moroccans") won the elections by a landslide a month ago. There is a real danger of disenchanted mainstreamers being attracted to Nazi propaganda in droves. We're stuck with them now for 4 years (unless they manage to collapse on their own, which I do hope).
No, that's why I said "Ideally", meaning it as a goal.
I don't think we have the means to do it yet, or at least I don't know of any platform working like that, but I have some ideas of how some of it could be done. Back in the days of Digg, with some people, we spitballed some ideas for social networks, among them a movie-ranking one (which turned out to be a flop because different people would categorize films differently), and a kind of PageRank for social networks, which back then was computationally impractical. But with modern LLMs running trillions of parameters, and further hardware advances, even O(n²) with n = millions becomes feasible in real time, and in practice it wouldn't need to do nearly that much work. With the right tuning and dynamic message visibility, I think something like that could create exactly the echo chambers that would attract X people, let in de-X people, and keep everyone else out and unbothered.
Of course there is a dark side, in that a platform could use the same strategy to mold the opinion of any group... and I wouldn't be surprised to learn that Meta had been doing exactly that.
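To make the "PageRank for social networks" idea above concrete, here is a minimal sketch of plain power-iteration PageRank over a follow graph. The toy graph, names, and function are all made up for illustration; a real platform would score users on a far larger (and weighted) graph, which is where the O(n²)-per-pass cost mentioned above comes in.

```python
# Minimal, illustrative PageRank via power iteration over a tiny
# "who follows whom" graph. Everything here is a made-up example.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each user to the list of users they follow."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}          # uniform starting scores
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                # each user passes a damped share of their rank downstream
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:
                # dangling user: spread their rank evenly over everyone
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

follows = {
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["alice"],
}
scores = pagerank(follows)
```

On a dense graph each pass touches on the order of n² edges, which is the cost the comment argues modern hardware makes tractable; real social graphs are sparse, so in practice far less work is needed.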
I'd argue that it still broke Reddit.
Back in the day, I might say something out of tone in some subreddit, get the comment flagged, discuss it with a mod, and either agree to edit it or get it removed. No problem.
Then Reddit started banning reactionary subs; subs started using bots to ban people for even commenting on other blacklisted subs; subs started abusing automod to ban people left and right; even quoting someone to criticize them started counting as using the same "forbidden words"; conversations with mods to clear stuff up pretty much disappeared; and applying the modern ToS retroactively to 10-year-old content became a thing... until I got permabanned from the whole site after trying to appeal a ban, with zero human interaction. Some months later, while already banned sitewide, they also banned me from some more subs.
Recently Reddit revealed a "hidden karma" feature to let automod pre-moderate potentially disruptive users.
Issues with the communities may have lessened, but there is definitely no longer the ability to say goofy, wild, or unpopular stuff... or in some cases, even to criticize them. There also have been an unknown number of "collateral damage" bans, that Reddit doesn't care about anymore.
imo if reddit couldn't survive "purging literally its worst elements, which included some of the most vehement bigotry and abhorrent content outside of 4chan" it probably doesn't deserve to survive
I see it as a cautionary tale about relying too much on automated mod tools to deal with an overwhelming userbase. People make mistakes, simple tools make more.
@jarfil @alyaza i have said plenty of wild stuff and haven't been banned from any subs? None of it has been bigoted tho
The only time I got banned for bigoted stuff was precisely for quoting someone's n-word and calling them out on it. Automod didn't care about the context, and no human did either. I also got banned for getting carried away and making a joke in a "no jokes" (zero tolerance) sub; several years of following the rules didn't grant me even a second chance. Then there was the funny time when someone made me a mod of a something-CCP sub, and several other subs automatically banned me.
There is a lot more going on on Reddit than meets the eye, and they like to keep it out of sight.
It sounds like the right call was made (as long as both you and the OP were banned). As a white person, there is no reason for you to use the n-word. In that situation simply changing it to "n-word" is the very least that could have been done
I'm not really sure how that provides an example of stuff going on in the background that someone wants to keep out of sight.
The thing is I did not "use" it, just quoted their whole message. In hindsight, maybe I should have changed it, but I still find it a flaw to not take context into account.
It provides an example of context-less rules blindly applied by a machine, with no public accountability of what happened, much less of the now gone context.
There are many better ways of handling those cases, like flagging the comment with a content warning, maybe replacing the offensive words, or locking it for moderation, instead of disappearing everything. I didn't have half a chance of fixing things, had to use reveddit to just guess what I might've done wrong.
The thing is, no context would have made it OK. You may have just been quoting someone, but you still used the word in the quote. Quotes are not some uneditable thing, so it was your choice to leave it in. Zero tolerance for hate means repeating the hateful thing is also not tolerated, and that, IMO, is a good thing and the perfect use of an auto-mod.
The other examples are a bit nebulous, and I have no doubt that communities on reddit have esoteric moderation guidelines, but this particular example seems pretty cut and dried.
Quotes are not uneditable... but neither are comments.
Wouldn't be the first time when the parent gets edited to make a reply look like nonsense, so I got used to quoting as a countermeasure. Then they unlocked comment editing even in 10 year old "archived" posts 🤦 (BTW, the same applies to Lemmy: should I quote you? will you edit what you said?... tomorrow, or in 10 years?... maybe I'll risk it, this time)
"Zero tolerance" becomes a problem when the system requires you to quote, but then some months or years later decides to change the rules and applies them retroactively. I still wouldn't mind if they just flagged, hid, or removed the comment, it's the "go on a treasure hunt to find out why you got banned" that I find insulting (kind of like the "wrong login"... /jk, you got banned. Wonder if it's been fixed in Lemmy already, I know of some sites that haven't for the last 15 years).
You kinda get into an ouroboros of who has fewer edits, and honestly I don't know how to solve for that, but I do know that if you had substituted "n-word" for the slur, it would look exactly the same if the OP edited the comment after the fact. Quoting the slur doesn't mitigate that.
Any policy becomes a problem at that point. It becomes less of a policy and more of a guideline.