
They get shit on a lot here. Why? What do they do and how is that different from other companies that offer similar services?

What I know of them: they offer DDoS, brute-force, and spam protection for websites.

[-] hedgehog@ttrpg.network 4 points 10 months ago

This reads to me like:

Cloudflare is consistent in their refusal to censor legal free expression: they won’t refuse service to sites on the basis of legal content. As a result, they serve sites containing offensive but legal expression, as well as some expression that should be illegal (and in some cases may already be). People are mad about this.

To emphasize their refusal to police the content of sites they serve, Cloudflare used to simply forward complaints about their customers to those customers. They thought they were making it clear that they were doing this, and maybe they were, but people sometimes miss those sorts of disclaimers, and given the subject matter of these complaints, that was a bad process on their part. They haven’t apologized, but they have amended their process in the years since.

Did I miss anything?

Now, I get that “free speech absolutist” is a dog whistle for “I’m a white supremacist” thanks to the ex-CEO of a particular social media company, but there’s a difference between

  1. saying it and not doing it, and
  2. actually doing it

And unlike the aforementioned antisemitic billionaire, Cloudflare is pretty consistent about this. They refuse to block torrent sites as well, and I’ve never heard of them blocking a site that was legal and should have been kept around. (Contrast that with the billionaire, who immediately banned the account of the guy tracking his private jet.)

That all said, Cloudflare did eventually cancel the accounts of The Daily Stormer, 8chan, and Kiwi Farms.

I wouldn’t feel as strongly about this if the examples of corporations that do censor speech didn’t show that they’re consistently bad at it. I’m talking about social media sites, payment processors, hosts, etc. If Cloudflare were more willing to censor sites, that would be a bad thing. And they agree:

After terminating services for 8chan and the Daily Stormer, "we saw a dramatic increase in authoritarian regimes attempting to have us terminate security services for human rights organizations — often citing the language from our own justification back to us," write Prince and Starzak in their August 31 blog post.

These past experiences led Cloudflare executives to conclude "that the power to terminate security services for the sites was not a power Cloudflare should hold," write Prince and Starzak. "Not because the content of those sites wasn't abhorrent — it was — but because security services most closely resemble Internet utilities."

To be clear, I’m not saying that social media sites should stop censoring nazis. I’m saying that social media sites are bad at censoring nazis: just as often, they censor activists, anti-fascists, and minorities who are literally just venting about oppression, and I see no reason why that would be different at the level of entire sites.

When you have a site that’s encouraging harassment, hate speech, cyber-bullying, defamation, etc., or engaging in those things directly, that should be a legal issue for the site’s owners. And on that note, my understanding is that there’s a warrant out for Anglin’s arrest and he owes $14 million to one of the women whose harassment he encouraged.

Cloudflare said they’re trying to basically behave like they’re a public utility. They’re strong proponents of net neutrality, which is in line with their actions here. There are reasons to be suspicious of or concerned about Cloudflare, but this isn’t a great example of one.

Side note: It’s funny to me that the comment immediately below yours says that one of the reasons to distrust Cloudflare is a concern that they may have been abusing their power (by virtue of effectively being a MITM) and censoring particular kinds of content.

[-] shellsharks@infosec.pub 2 points 10 months ago

A measured response, to be sure. Thanks for writing it up. I'm definitely not the one who's going to tell you for sure what Cloudflare should or should not do in this case or any other. It's a tricky business to be in when it comes to making those decisions. That said, I do think there is a line to be drawn SOMEWHERE, and because of this they would eventually need to deplatform something. If that signals to the regimes of the world that Cloudflare can be influenced, then so be it, but to me (and I think a lot of the people who were going after Cloudflare during this time), Nazis (and those sites you mentioned, e.g. Kiwi Farms) are easy to draw lines for. Good thing I'm just a dude on Lemmy and not a high-powered CF exec hah!

[-] hedgehog@ttrpg.network 0 points 10 months ago

You’re welcome, and thanks for the reply!

I think drawing the line at nazis is a good idea in theory, but a very difficult one to implement in practice. For example:

  • If someone doesn’t self-ID as a nazi, how do you determine that they are one?
  • What if the site’s owner self-IDs as a nazi but this particular website is just a bunch of cooking recipes?
  • Suppose the site owner probably isn’t a nazi, but the site has a bunch of users and a subset of them are creating content that crosses the line, and the site has a hands off approach to content moderation. If the site is 1% nazi content and 99% fine, do you block them entirely unless they agree to remove nazi content? If not, at what threshold does that change? 10%? 51%?
  • Once you’ve done that and they’ve agreed, do you have to establish minimum response times for them to remove nazi content? If the nazi content isn’t taken down until half the site’s daily visitors have seen it, the content moderation isn’t very effective. But if you require them to act too fast, that could result in many people being refused service because of other bad actors.
  • The bad actors aren’t even necessarily nazis. If it’s known that Cloudflare refuses service to sites that leave nazi content up for more than X amount of time, then it becomes feasible to take down a site that allows comments by registering a bunch of accounts and filling it with so much nazi content that the site’s moderation team can’t handle it in time. How do you prevent this?
  • Do you require them to ban nazis?
  • If they do, but the nazis just register new accounts, do you require them to detect that somehow? Do you have to build that capability and offer it yourself? Now you’re policing individual users. You’re inevitably going to end up stopping Grannie from registering for an account because of someone else - they jumped on her wifi, compromised a device on her network, or something along those lines.

This is all pretty complicated, and I’ve barely scratched the surface.

The revised line they drew with Kiwi Farms (as well as the “we follow US law” line they already had) is a much simpler one that’s still morally defensible:

“We think there is an imminent danger, and the pace at which law enforcement is able to respond to those threats we don’t think is fast enough to keep up.”

One word you used stuck out to me: “deplatform.” I wouldn’t call this deplatforming. I’m used to seeing that word used to refer to someone being removed from social media, having their YouTube channel shut down, having their podcast removed from Spotify, etc. I mentioned this in another comment on this post, but those situations are fundamentally different, and it follows that the criteria for doing so should be different. In that other comment I also talked a bit about why I think free speech is infringed if you can’t publish a website, but isn’t infringed if you can’t create a Facebook account.

You also might find this Wired article interesting - it has quotes from and background about the CEO of Cloudflare related to The Daily Stormer’s removal, some insight into the internal company dialogue while that was all ongoing, etc.

[-] shellsharks@infosec.pub 1 points 10 months ago

I'm taking a bit more literal interpretation of "de-platform", which I agree is not the way it has been traditionally used. In my case, if a platform takes you down, you were just de-platformed =). As for the question of "what is a nazi?", I 100% agree in terms of "where is the line". Yes, there are some very obvious cases that I think 100% of people would identify in the same way, but there is undoubtedly that pesky ol' gray area (which, as your bulleted list makes clear, is a non-trivially large area) where things start to get a little more subjective. Sure, it'd be great if companies (like Cloudflare) smell-tested things the same way I do haha, but outside of that, it is no doubt difficult to define.

this post was submitted on 24 Jan 2024
116 points (96.0% liked)

No Stupid Questions
