submitted 17 hours ago* (last edited 17 hours ago) by dgerard@awful.systems to c/sneerclub@awful.systems

this is Habryka talking about how his moderating skills are so powerful it takes lesswrong three fucking years to block a poster who's actively being a drain on the site

here's his reaction to sneerclub (specifically me - thanks Oliver!) calling LessOnline "wordy racist fest":

A culture of loose status-focused social connection. Fellow sneerers are not trying to build anything together. They are not relying on each other for trade, coordination or anything else. They don't need to develop protocols of communication that produce functional outcomes, they just need to have fun sneering together.

He gets us! He really gets us!

top 46 comments
[-] swlabr@awful.systems 3 points 1 hour ago* (last edited 1 hour ago)

That it took this long to ban this guy and this many words is so delicious. What a failure of a community. What a failure in moderation.

Based on the words and analogies in that post: participating in LW must be like being in a circlejerk where everyone sucks at circlejerking. Guys like Said run around the circle yelling at them about how their technique sucks and that they should feel bad. Then they chase him out and continue to be bad at mutual jorkin.

E: That they don’t see the humor in sneering at “celebrating blogging” and that it’s supposedly us at our worst is very funny.

[-] dgerard@awful.systems 3 points 1 hour ago

you can tell the real problem was I called them racist

[-] diz@awful.systems 4 points 4 hours ago* (last edited 3 hours ago)

Lol I literally told these folks, something like 15 years ago, that paying to elevate a random nobody like Yudkowsky as the premier “ai risk” researcher, insomuch as there is any AI risk, would only increase it.

Boy did I end up more right on that than my most extreme imagination. All the moron has accomplished in life is helping these guys raise cash with all his hype about how powerful the AI would be.

The billionaires who listened are spending hundreds of billions of dollars - soon to be trillions, if not already - on trying to prove Yudkowsky right by having an AI kill everyone. They literally tout “our product might kill everyone, idk” to raise even more cash. The only saving grace is that it is dumb as fuck and will only make the world a slightly worse place.

[-] froztbyte@awful.systems 3 points 3 hours ago* (last edited 3 hours ago)

some UN-associated ACM talk I was listening to recently had someone cite a number at (iirc) ~~$1.5tn total estimated investment~~ $800b[0]. haven't gotten to fact-check it but there's a number of parts of that talk I wish to write up and make more known

one of the people in it made some entirely AGI-pilled comments, and it's quite concerning

this talk; looks like video is finally up on youtube too (at the time I yanked it by pcap-ing a zoom playout session - turns out zoom recordings are hella aggressive about not being shared)

the question I asked was:

To Csaba (the current speaker): it seems that a lot of the current work you're engaged in is done presuming that AGI is a certainty. what modelling have you done without that presumption?

response is about here

[0] edited for correctness; forget where I saw the >$1.5t number

[-] dgerard@awful.systems 4 points 3 hours ago

hearing him respond like that in real time and carefully avoiding the point makes clear the attraction of ChatGPT

[-] self@awful.systems 16 points 12 hours ago

from the (extensive) footnotes:

Occupy Wallstreet strikes me as another instance of the same kind of popular sneer culture. Occupy Wallstreet had no coherent asks, no worldview that was driving their actions.

it’s so easy to LessWrong: just imagine that your ideological opponents have no worldview and aren’t trying to build anything, sprinkle in some bullshit pseudo-statistics, and you’re there!

[-] Amoeba_Girl@awful.systems 12 points 13 hours ago* (last edited 13 hours ago)

A small sidenote on a dynamic relevant to how I am thinking about policing in these cases:

A classical example of microeconomics-informed reasoning about criminal justice is the following snippet of logic.

If someone can gain in-expectation X dollars by committing some crime (which has negative externalities of Y > X dollars), with a probability p of getting caught, then in order to successfully prevent people from committing the crime you need to make the cost of receiving the punishment (Z) be greater than X/p, i.e. X < p·Z.

Or in less mathy terms, the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.

In this case, a core component of the pattern of plausible-deniable aggression that I think is present in much of Said's writing is that it is very hard to catch someone doing it, and even harder to prosecute it successfully in the eyes of a skeptical audience. As such, in order to maintain a functional incentive landscape the punishment for being caught in passive or ambiguous aggression needs to be substantially larger than for e.g. direct aggression, as even though being straightforwardly aggressive has in some sense worse effects on culture and norms (though also less bad effects in some other ways), the probability of catching someone in ambiguous aggression is much lower.

Fucking hell, that is one of the stupidest most dangerous things I've ever heard. Guy solves crime by making the harshness of punishment proportional to the difficulty of passing judgement. What could go wrong?
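For what it's worth, the deterrence arithmetic being quoted above is simple enough to sketch in a few lines (a toy illustration with made-up numbers, not anything from the original post):

```python
# Toy illustration of the quoted deterrence logic: a rational actor is
# deterred only if the expected punishment p * Z exceeds the expected
# gain X, i.e. the punishment Z must be greater than X / p.

def min_deterrent_punishment(gain_x: float, catch_prob_p: float) -> float:
    """Smallest punishment Z (in the same units as the gain) with p * Z > X."""
    if not 0 < catch_prob_p <= 1:
        raise ValueError("catch probability must be in (0, 1]")
    return gain_x / catch_prob_p

# As the odds of getting caught shrink, the "required" punishment explodes,
# which is exactly the dynamic the comment is objecting to.
for p in (0.9, 0.5, 0.1, 0.01):
    print(f"p = {p}: Z must exceed {min_deterrent_punishment(100.0, p):.0f}")
```

Run it and a $100 gain at a 1% catch rate already "requires" a $10,000 punishment, i.e. the model has no brakes as detection gets harder.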

[-] ndevenish@mas.to 4 points 3 hours ago

@Amoeba_Girl @sneerclub isn’t this exactly the same “logic” that escalated the zizians to multiple murders?

[-] sailor_sega_saturn@awful.systems 7 points 10 hours ago

"So, what are you in for?" "Making a right turn on a bicycle without signalling continuously for the last 100 feet before the turn in violation of California Vehicle Code 22108"

[-] blakestacey@awful.systems 5 points 9 hours ago

"...And creatin' a nuisance"

[-] istewart@awful.systems 6 points 10 hours ago

Hmm, yes, I must develop a numerical function to determine whether or not somebody doesn't like me...

One thing he gets is that direct aggression is definitely more effective in this situation. I can, and do, tell these people to fuck straight off, and my life is better for it!

[-] dgerard@awful.systems 12 points 14 hours ago

Indeed, the LinkedIn attractor appears to be the memetically most successful way groups relate to their ingroup members, while the sneer attractor governs how they relate to their outgroups.

AND OLIVER COMES IN FROM THE TOP ROPE WITH THE HOTDOG COSTUME

[-] dgerard@awful.systems 11 points 13 hours ago* (last edited 13 hours ago)

Moderators need the authority to, at some level, police the vibe of your comments, even without a fully mechanical explanation of how that vibe arises from the specific words you chose.

hey everyone i am going to become top mod on this forum, now let me just reinvent human interaction from first principles

[-] Amoeba_Girl@awful.systems 10 points 14 hours ago

Jesus christ, just ban the guy! Don't write a million words about how much he gets under your skin! Rude!!!!

[-] Soyweiser@awful.systems 12 points 14 hours ago* (last edited 14 hours ago)

How it started: gonna build the robotgod but nice

How it went: wow we need to teach people how to think.

How it ended: we cant do basic things people have done since we decided to walk upright because some people are mean.

Even 4chan can trade/coordinate/and have functional outcomes, sure often for evil. But most of us are not even active on lw. Skill issue. If you cant beat a bunch of sneerers who are not even participating, what chance do they have against the godAI (same with not being able to convince one human).

eponymous sneerlcub

Eponymous even. Guess they don't know who named sneerclub.

Some more sneering: why make footnotes look like you are actually linking to proof? It is done all over the place and it tricks me a lot: 'ah, they backed it up with a source', nope, some random footnote which is so wordy it breaks the site on mobile.

the more likely it is that someone can get away with committing a crime, the harsher the punishment needs to be for that crime.

The death penalty of not just you but your whole family if you copy that floppy.

Now, does that mean that everyone is free to vote however they want?

The answer is a straightforward "no".

Holy shit, hahaha what are you wasting your time on...

Scrolled down to the comments:

How would this situation play out in a world like dath ilan? A world where The Art has progressed to something much more formidable.

Yeah indeed how would they solve these problems in Ravenloft.

[-] fullsquare@awful.systems 5 points 5 hours ago

The death penalty of not just you but your whole family if you copy that floppy.

thermonuclear ballistic missile on lightcone infra for all the time and brains they have wasted

[-] Soyweiser@awful.systems 2 points 1 hour ago

With apologies to Stross: "you shall not copy floppies in my lightcone"

[-] BlueMonday1984@awful.systems 5 points 5 hours ago

Even 4chan can trade/coordinate/and have functional outcomes, sure often for evil.

To give a rather notorious example, there's the He Will Not Divide Us flag in 2017, which the 'channers tracked down after only 38 hours, despite Shia LaBeouf's attempts to keep the location hidden.

The death penalty of not just you but your whole family if you copy that floppy.

The future media conglomerates want. (okay maybe not the "death penalty" part - dead people don't make money)

[-] Soyweiser@awful.systems 2 points 1 hour ago

Re the flag.

Not just that: in a less malicious case, 1d4chan (and now 1d6chan) was also a 4chan production iirc (with others from the internet also helping). It documents all kinds of strange Warhammer lore, the /tg/ interpretation of it, and their various hatreds for certain authors of the games. For example https://1d6chan.miraheze.org/wiki/Robin_Cruddace

[-] BlueMonday1984@awful.systems 1 points 43 minutes ago

The flag was the most obvious one I could think of, given how many eyes were already on HWNDU and how swiftly they found it. In retrospect, I should've chosen 1d4chan/1d6chan as my example, given how large and robust it is as a wiki.

The SCP Foundation arguably qualifies as well - it began on /x/ as a random post, before morphing into the ongoing collaborative writing project we all know and love.

[-] jonhendry@awful.systems 6 points 7 hours ago

Eponymous even. Guess they don’t know who named sneerclub.

Mister Sneerclub of the Newport Sneerclubs, of course.

[-] Soyweiser@awful.systems 4 points 5 hours ago
[-] dgerard@awful.systems 5 points 4 hours ago
[-] Soyweiser@awful.systems 3 points 1 hour ago

Only for friends, so we should call him Mister Sneerclub. Or Herr Sneer if you are German and want to be informal.

[-] scruiser@awful.systems 13 points 13 hours ago* (last edited 13 hours ago)

we cant do basic things

That's giving them too much credit! They've generated the raw material for all the marketing copy and jargon pumped out by the LLM companies producing the very thing they think will doom us all! They've served a small but crucial role in the influence farming of the likes of Peter Thiel and Elon Musk. They've served as an entry point to the alt-right pipeline!

dath ilan?

As a self-certified Eliezer understander, I can tell you dath ilan would open up a micro-prediction market on various counterfactual ban durations. Somehow this prediction market would work excellently despite a lack of liquidity and multiple layers of skewed incentives that should outweigh any money going into it. Also, Said would have been sent to a ~~reeducation camp~~ quiet city, and ~~sterilized~~ denied UBI if he reproduces, for not conforming to dath ilan's norms much earlier.

[-] blakestacey@awful.systems 15 points 15 hours ago* (last edited 15 hours ago)

From the comments:

If Said returns, I'd like him to have something like a "you can only post things which Claude with this specific prompt says it expects to not cause " rule, and maybe a LLM would have the patience needed to show him some of the implications and consequences of how he presents himself.

And:

Couldn't prediction markets solve this?

Ain't enough lockers in the world, dammit

[-] blakestacey@awful.systems 18 points 16 hours ago

Of course, commenters on LessWrong are not dumb, and have read Scott Alexander,

It's like sneering at fish in an aquarium

[-] blakestacey@awful.systems 18 points 16 hours ago

"They don't need to develop protocols of communication that facilitate buying castles, fluffing our corporate overlords, or recruiting math pets. They share vegan recipes without even trying to build a murder cult."

[-] flowerysong@awful.systems 10 points 15 hours ago

Here's a vegan gumbo I made for Thanksgiving a couple years back.

[-] blakestacey@awful.systems 7 points 13 hours ago* (last edited 13 hours ago)

I've never tried a Pyrex roux before. I'll have to give that a shot. Often, I use our Pyrexen to rehydrate textured vegetable protein. Scoop a couple cups from the giant box in the pantry, add a couple teaspoons of stock concentrate (e.g., the Better Than Bouillon veggie and roasted garlic flavors), add water until the granules start floating, stir, microwave 30 seconds, stir, microwave another 30 seconds. Then it's ready for skillet-frying with whatever spices and other flavorings seem appropriate in the moment. Chili powder, red pepper flakes, cumin, oregano and a dash of cocoa powder makes for a good Tex-Mex flavor profile that can sub for ground beef in tacos, enchiladas, etc. Soy sauce, mirin and sugar or agave is a straightforward teriyaki. It's pretty versatile stuff.

The Totole "Granulated Chicken Flavor Soup Base Mix" is another good flavor boost.

[-] blakestacey@awful.systems 7 points 16 hours ago* (last edited 16 hours ago)
[-] dgerard@awful.systems 6 points 14 hours ago
[-] blakestacey@awful.systems 6 points 9 hours ago

Come to the Sneer Attractor, we have brownies

[-] scruiser@awful.systems 10 points 15 hours ago* (last edited 13 hours ago)

I'm feeling an effort sneer...

For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.

Every time I read about a case like this my conviction grows that sneerclub's vibe based moderation is the far superior method!

The key component of making good sneer club criticism is to never actually say out loud what your problem is.

We've said it multiple times, it's just a long list that is inconvenient to say all at once. The major things that keep coming up: The cult shit (including the promise of infinite AGI God heaven and infinite Roko's Basilisk hell; and including forming high demand groups motivated by said heaven/hell); the racist shit (including the eugenics shit); the pretentious shit (I could actually tolerate that if it didn't have the other parts); and lately serving as crit-hype marketing for really damaging technology!

They don't need to develop protocols of communication that produce functional outcomes

Ahem... you just admitted to taking a hundred hours to ban someone, whereas dgerard and co kick out multiple troublemakers in our community within a few hours tops each. I think we are winning on this one.

For LessWrong to become a place that can't do much but to tear things down.

I've seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith's wild genetic engineering fantasies come to mind).

[-] blakestacey@awful.systems 8 points 8 hours ago* (last edited 7 hours ago)

The key component of making good sneer club criticism is to never actually say out loud what your problem is.

I wrote 800 words explaining how TracingWoodgrains is a dishonest hack, when I could have been getting high instead.

But we don't need to rely on my regrets to make this judgment, because we have a science-based system on this ~~podcast~~ instance. We can sort all the SneerClub comments by most rated. Nothing that the community has deemed an objective banger is vague.

[-] Soyweiser@awful.systems 4 points 6 hours ago* (last edited 6 hours ago)

The problem is they dont read sneerclub well, so they dont realize we dont relitigate the same shit every time. So when they come in with their hammers (prediction markets, being weird about ai, etc) we just go 'lol, these nerds' and dont go writing down the same stuff every time. As the community has a shared knowledge base, they do the same by not going into details every time how a prediction market would help and work. But due to their weird tribal thinking and thinking they are superior they think when we do it it is bad.

It is just amazing how much he doesn't get basic interactions. And not like we dont like to explain stuff when new people ask about it. Or often when not even asked.

Think one of the problems with lw is that they think stuff that is long, is well written and argued, even better if it used a lot of complex sounding words. see how they like Chris Langan as you mentioned. Just a high rate of 'I have no idea what he is talking about but it sounds deep' shit.

To quote from the lw article you linked on the guy

CTMU has a high-IQ mystique about it: if you don't get it, maybe it's because your IQ is too low. The paper itself is dense with insights, especially the first part.

Makes you wonder how many people had a formal academic education, as one of the big things about that is that it has none of this mystique: it builds on itself, and often can feel reasonably easy and sensible. (Because learning the basics preps you for the more advanced stuff, which is not to say this is the case every time, esp if some of your skills are lacking, but none of this high-IQ mystique (which also seems like the utter wrong thing to look for).)

[-] blakestacey@awful.systems 8 points 12 hours ago

I’ve seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith’s wild genetic engineering fantasies come to mind).

Their fluffing Chris Langan is the example that comes to mind for me.

[-] o7___o7@awful.systems 15 points 14 hours ago* (last edited 14 hours ago)

Ya don't debate fascists, ya teach them the lesson of history. The Official Sneerclub Style Manual indicates that this is accomplished with various pedagogical tools, including laconic mockery, administrative trebuchets, and socks with bricks in them.

[-] scruiser@awful.systems 11 points 13 hours ago

That too.

And judging by how all the elegantly charitably written blog posts on the EA forums did jack shit to stop the second manifest conference from having even more racists, debate really doesn't help.

[-] blakestacey@awful.systems 6 points 14 hours ago
[-] scruiser@awful.systems 3 points 13 hours ago

Yes, thanks. I always forget how many enters i need to hit.

[-] o7___o7@awful.systems 12 points 16 hours ago* (last edited 14 hours ago)

I, an anonymous man from the internet who called Peter Thiel a racist hotdog, am the one with real power.

[-] jonhendry@awful.systems 3 points 8 hours ago

You might need to update that to "racist wax hotdog" judging from his appearance lately.

[-] Soyweiser@awful.systems 7 points 14 hours ago

It is very important we do not congratulate you over this, or we will become linkedin!

[-] TinyTimmyTokyo@awful.systems 11 points 16 hours ago
this post was submitted on 23 Aug 2025
30 points (100.0% liked)

SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago