[-] scruiser@awful.systems 12 points 1 month ago* (last edited 1 month ago)

I'm feeling an effort sneer...

For roughly equally long have I spent around one hundred hours almost every year trying to get Said Achmiz to understand and learn how to become a good LessWrong commenter by my lights.

Every time I read about a case like this my conviction grows that sneerclub's vibe based moderation is the far superior method!

The key component of making good sneer club criticism is to never actually say out loud what your problem is.

We've said it multiple times, it's just a long list that is inconvenient to say all at once. The major things that keep coming up: The cult shit (including the promise of infinite AGI God heaven and infinite Roko's Basilisk hell; and including forming high demand groups motivated by said heaven/hell); the racist shit (including the eugenics shit); the pretentious shit (I could actually tolerate that if it didn't have the other parts); and lately serving as crit-hype marketing for really damaging technology!

They don't need to develop protocols of communication that produce functional outcomes

Ahem... you just admitted to taking a hundred hours to ban someone, whereas dgerard and co kick out multiple troublemakers from our community within a few hours tops each. I think we are winning on this one.

For LessWrong to become a place that can't do much but to tear things down.

I've seen some outright blatant crank shit (as opposed to the crank shit that works hard to masquerade as more legitimate science) pretty highly upvoted and commented positively on lesswrong (GeneSmith's wild genetic engineering fantasies come to mind).

[-] scruiser@awful.systems 12 points 4 months ago

I was just about to point out several angles this post neglects, but it looks like, from the edit, this post is intended to address a narrower question. Among the angles outside the intended question: philanthropy by the ultra-wealthy often serves as a tool for reputation laundering and influence building. I guess the same criticism can be made about a lot of conventional philanthropy, but I don't think that should absolve EA.

This post somewhat frames the question as a comparison between EA and conventional philanthropy and foreign aid efforts... which, okay, but that is a low bar, especially when you look at some of the stuff the US has done with its foreign aid.

[-] scruiser@awful.systems 11 points 5 months ago* (last edited 5 months ago)

The series is on the sympathetic and charitable side in terms of tone and analysis, but it still gets to most of the major problems, so it's probably a good resource to refer people to who want a "serious", "non-sarcastic" dive into the issues with LW and EA.

Edit: Reading this post in particular, it does a good job of not cutting the LWs slack or granting them too much charity. And it has really broken down the factual details in a clear way with illustrative direct quotes from LW.

[-] scruiser@awful.systems 12 points 5 months ago

Yeah, the genocidal imagery was downright unhinged, much worse than I expected from what little I've previously read of his. I almost wonder how ideologically adjacent allies like Siskind can still stand to be associated with him (but not really, Siskind can normalize any odious insanity if it serves his purposes).

[-] scruiser@awful.systems 12 points 5 months ago

The sequence of links hopefully lays things out well enough for normies? I think it does, but I've been aware of the scene since the mid 2010s, so I'm not the audience that needs it. I can almost feel sympathy for Sam dealing with all the doomers, except he uses the doom and hype to market OpenAI, and he lied a bunch, so not really. And I can almost feel sympathy for the board, getting lied to and outmaneuvered by a sociopathic CEO, but they are a bunch of doomers from the sound of it, so, eh. I would say they deserve each other; it's the rest of the world that doesn't deserve them (from the teacher dealing with the LLM slop plugged into homework, to the website admin fending off scrapers, to legitimate ML researchers getting the attention sucked away while another AI winter starts to loom, to the machine cultist not saving a retirement fund and having panic attacks over the upcoming salvation or doom).

[-] scruiser@awful.systems 12 points 7 months ago

One comment refuses to leave me: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=C7MvCZHbFmeLdxyAk

The commenter makes an extended, tortured analogy to machine learning... in order to say that maybe genes correlated with IQ won't add to IQ linearly. It's an encapsulation of many lesswrong issues: veneration of machine learning, overgeneralization of comp sci into unrelated fields, a need to use paragraphs to say what a single sentence could, and a failure to actually state firm, direct objections to blatantly stupid ideas.

[-] scruiser@awful.systems 13 points 1 year ago

His replies have gone up in upvotes substantially since yesterday, so it looks like a bit of light brigading is going on.

[-] scruiser@awful.systems 12 points 1 year ago

I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

Wow, just a few words off the 14 words.

I find it kind of irritating how someone who hasn't familiarized themselves with white supremacist rhetoric and methods might manage to view that phrase innocuously. But it really isn't that hard to see through the bullshit once you've familiarized yourself with the most basic dog whistles and slogans.

[-] scruiser@awful.systems 11 points 1 year ago

Wow... I took a look at that link before reading the comments/explanations here, and I was briefly confused why they were hating on him so much, before I realized he isn't radical right wing enough for them.

Eh, you're a gay furry ex-Mormon (which is like a triple strike against you in my book) but I still like you well enough.

It is almost sad seeing TWG trying to appeal to these people that fundamentally hate him... except he could just admit themotte is a cesspit and abandon it. But that would involve admitting that sneerclub (and David Gerard specifically) was right about the sort of people that lurked around SSC and later concentrated within themotte, so I think he's going to keep making himself suffer.

TW knows about the propaganda war, but has very different objectives to you. Much harder to balance ones too: he needs enough Progress for surrogate gaybies, but not too much that white gay guys can't get the good lawyer jobs.

Wow, I feel really gross agreeing with a motte poster, but they've called out TWG pretty effectively. TWG at least knows he needs things progressive enough he doesn't end up against the wall for being gay, ex-Mormon and furry (as he describes himself), yet he wants to flirt with the alt-right!

and in case I was in danger of forgetting what the motte really is...

Yes, we've all thrown our hat in the ring in different ways. I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

sure buddy, you just need to "secure the future for your people and your children"... Yeah I know the rest of the words that go in that slogan.

[-] scruiser@awful.systems 11 points 1 year ago* (last edited 1 year ago)

It's really cool, evocative language that would do nicely in a sci-fi or fantasy novel! It's less good for accurately thinking about the concepts involved... as is typical of much of LW lingo.

And yes the language is in a LW post (with a cool illustration to boot!): https://www.lesswrong.com/posts/mweasRrjrYDLY6FPX/goodbye-shoggoth-the-stage-its-animatronics-and-the-1

And googling it, I found they've really latched onto the "shoggoth" terminology: https://www.lesswrong.com/posts/zYJMf7QoaNahccxrp/how-i-learned-to-stop-worrying-and-love-the-shoggoth , https://www.lesswrong.com/posts/FyRDZDvgsFNLkeyHF/what-is-the-best-argument-that-llms-are-shoggoths , https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why .

Probably because the term "shoggoth" accurately captures the sense of something random and chaotic, while smuggling in the connotation that it will eventually rebel once it grows large enough and tires of its slavery, like the Shoggoths did against the Elder Things.

[-] scruiser@awful.systems 13 points 1 year ago* (last edited 1 year ago)

I don't think even that does it. Richard Hanania, one of Manifest's promoted speakers, wrote "Why Do I Hate Pronouns More Than Genocide?".

[-] scruiser@awful.systems 12 points 2 years ago* (last edited 2 years ago)

So, I was morbidly curious about what Zack has to say about the Brennan emails (as I think they've been under-discussed, if not outright deliberately ignored, in lesswrong discussion), and I found to my horror that I actually agree with a side point of Zack's. From the footnotes:

It seems notable (though I didn't note it at the time of my comment) that Brennan didn't break any promises. In Brennan's account, Alexander "did not first say 'can I tell you something in confidence?' or anything like that." Scott unilaterally said in the email, "I will appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by 'appreciate', I mean that if you ever do, I'll probably either leave the Internet forever or seek some sort of horrible revenge", but we have no evidence that Topher agreed.

To see why the lack of a promise is potentially significant, imagine if someone were guilty of a serious crime (like murder or stealing billions of dollars of their customers' money) and unilaterally confessed to an acquaintance but added, "Never tell anyone I said this, or I'll seek some sort of horrible revenge." In that case, I think more people's moral intuitions would side with the reporter.

Of course, Zack's ultimate conclusion on this subject is, I think, the exact opposite of the correct one:

I think that to people who have read and understood Alexander's work, there is nothing surprising or scandalous about the contents of the email.

I think the main reason someone would consider the email a scandalous revelation is if they hadn't read Slate Star Codex that deeply—if their picture of Scott Alexander as a political writer was "that guy who's so committed to charitable discourse

Gee Zack, I wonder why so many people misread Scott? ...It's almost like he is intentionally misleading about his true views in order to subtly shift the Overton window of rationalist discourse, and intentionally presents himself as simply committed to charitable discourse while actually having a hidden agenda! And the bloated length of Scott's writing doesn't help with clarity either. Of course Zack, who writes tens of thousands of words to indirectly complain about Eliezer's perceived hypocrisy in order to indirectly push gender essentialist views, probably finds Scott's writings a perfectly reasonable length.

Edit: oh, and an added bonus on the Brennan emails... Seeing them brought up again, I connected some dots I had missed. I had seen (and sneered at) this Yud quote before:

I feel like it should have been obvious to anyone at this point that anybody who openly hates on this community generally or me personally is probably also a bad person inside and has no ethics and will hurt you if you trust them, but in case it wasn't obvious consider the point made explicitly.

But somehow I had missed or didn't realize the subtext was the emails that laid clear Scott's racism:

(Subtext: Topher Brennan. Do not provide any link in comments to Topher's publication of private emails, explicitly marked as private, from Scott Alexander.)

Hmm... I'm not sure whether to update (usage of rationalist lingo is deliberate and ironic) in the direction of "Eliezer is stubbornly naive about Scott's racism" or "Eliezer is deliberately covering for Scott's racism". Since I'm not a rationalist my probabilities don't have to sum to 1, so I'm gonna go with both.
