1

the old one is three weeks old, let's start another

previous thread

2
submitted 5 hours ago* (last edited 4 hours ago) by scruiser@awful.systems to c/sneerclub@awful.systems

So, lesswrong Yudkowskian orthodoxy is that any AGI without "alignment" will bootstrap to omnipotence, destroy all mankind, blah, blah, etc. However, there is the large splinter heresy of accelerationists, who want AGI as soon as possible and aren't worried about this at all (we still make fun of them, because what they want would produce some cyberpunk dystopian shit in the process of trying to reach it). Even the accelerationists, though, don't want Chinese AGI, because insert standard sinophobic rhetoric about how they hate freedom and democracy, or have world-conquering ambitions, or simply lack the creativity, technical ability, or background knowledge (i.e. lesswrong screeds on alignment) to create an aligned AGI.

This is a long running trend in lesswrong writing I've recently noticed while hate-binging and catching up on the sneering I've missed (I had paid less attention to lesswrong over the past year up until Trump started making techno-fascist moves), so I've selected some illustrative posts and quotes for your sneering.

  • Good news, China actually has no chance of competing at AI (this was posted before deepseek was released). Well, they are technically right that China doesn't have the resources to compete in scaling LLMs to AGI, because it isn't possible in the first place

China has neither the resources nor any interest in competing with the US in developing artificial general intelligence (AGI) primarily via scaling Large Language Models (LLMs).

  • The Situational Awareness essays make sure to get their Yellow Peril fearmongering on! Because clearly China is the threat to freedom and the authoritarian power to worry about (pay no attention to the techbro techno-fascists)

In the race to AGI, the free world’s very survival will be at stake. Can we maintain our preeminence over the authoritarian powers?

  • More crap from the same author
  • There are some posts pushing back on an AGI race with China, but not because they are correcting the sinophobia or the delusion that LLMs are a path to AGI; rather, because the race will potentially lead to an unaligned or improperly aligned AGI
  • And of course, AI 2027 features a race with China that either the US can win with an AGI slowdown (and an evil AGI puppeting China) or both sides lose to the AGI menace. Featuring "legions of CCP spies"

Given the “dangers” of the new model, OpenBrain “responsibly” elects not to release it publicly yet (in fact, they want to focus on internal AI R&D). Knowledge of Agent-2’s full capabilities is limited to an elite silo containing the immediate team, OpenBrain leadership and security, a few dozen US government officials, and the legions of CCP spies who have infiltrated OpenBrain for years.

  • Someone asks the question directly: Why Should I Assume CCP AGI is Worse Than USG AGI? Judging by the upvoted comments, the lesswrong orthodoxy that all AGI leads to doom is the most common opinion, and a few comments even point out the hypocrisy of promoting fear of Chinese AGI while saying the US should race for AGI to achieve global dominance, but there are still plenty of Red Scare/Yellow Peril comments

Systemic opacity, state-driven censorship, and state control of the media means AGI development under direct or indirect CCP control would probably be less transparent than in the US, and the world may be less likely to learn about warning shots, wrongheaded decisions, reckless behaviour, etc. True, there was the Manhattan Project, but that was quite long ago; recent examples like the CCP's suppression of information related to the origins of COVID feel more salient and relevant.

3

I am still subscribed to slatestarcodex on reddit, and this piece of garbage popped up on my feed. I didn't actually read the whole thing, but basically the author correctly realizes Trump is ruining everything in the process of getting at "DEI" and "wokism", but instead of accepting the blame that rightfully falls on Scott Alexander and themselves, the author deflects and blames the "left" elitists. (I put "left" in quote marks because the author apparently thinks establishment Democrats are actually leftists; I fucking wish.)

An illustrative quote (of Scott's that the author agrees with)

We wanted to be able to hold a job without reciting DEI shibboleths or filling in multiple-choice exams about how white people cause earthquakes. Instead we got a thousand scientific studies cancelled because they used the string “trans-” in a sentence on transmembrane proteins.

I don't really follow their subsequent points; they fail to clarify what they mean... Insofar as "left elites" actually refers to centrist Democrats, I do think the Democratic establishment deserves a major piece of the blame: their status quo neoliberalism has been rejected by the public, yet they refuse to consider genuinely leftist ideas. But that isn't the point this author is going for... the author is actually upset about Democrats "virtue signaling" and "canceling" and DEI, so they don't have a valid point; if anything, the opposite of one.

In case my angry disjointed summary leaves you any doubt the author is a piece of shit:

it feels like Scott has been reading a lot of Richard Hanania, whom I agree with on a lot of points

For reference the ssc discussion: https://www.reddit.com/r/slatestarcodex/comments/1jyjc9z/the_edgelords_were_right_a_response_to_scott/

tldr; author trying to blameshift on Trump fucking everything up while keeping up the exact anti-progressive rhetoric that helped propel Trump to victory.

4

Might unilateral billionaire funding skew priorities?

Should the "epistemically humble" listen to people who disagree with them?

Might it be undemocratic to give some people many times more voting power?

Find out this week on 'Keeping up with the Effective Altruists'

5
Moldbug has a sad (www.thenerdreich.com)

Apparently DOGE isn’t killing enough people (literally or metaphorically)

6

7

A nice and solid mockery of just how badly e/accs derailed their own plans by getting Trump elected. I'll let the subtitle(?) speak for itself:

Effective accelerationists didn’t just accidentally shoot themselves in the foot. They methodically blew off each of their toes with a .50 caliber sniper rifle.

8

Came across this fuckin disaster on Ye Olde LinkedIn by 'Caroline Jeanmaire at AI Governance at The Future Society'

"I've just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027. Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it's a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.

What makes this forecast exceptionally credible:

  1. One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed

  2. The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio

  3. It makes concrete, testable predictions rather than vague statements that cannot be evaluated

The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.

As the authors state: "It would be a grave mistake to dismiss this as mere hype."

For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."

Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let's at least take a look inside for some of their deep quantitative reasoning...

....hmmmm....

O_O

The answer may surprise you!

9
submitted 2 weeks ago* (last edited 2 weeks ago) by dgerard@awful.systems to c/sneerclub@awful.systems

Thinking about how the arsing fuck to explain the rationalists to normal people - especially as they are now a loud public problem along multiple dimensions.

The problem is that it's all deep in the weeds. Every part of it is "it can't be that stupid, you must be explaining it wrong."

With bitcoin, I have, over the years, simplified it to being a story of crooks and con men. The correct answer to "what is a blockchain and how does it work" is "it's a way to move money around out of the sight of regulators" and maybe "so it's for crooks and con men, and a small number of sincere libertarians" and don't even talk about cryptography or technology.

I dunno what the one sentence explanation is of this shit.

"The purpose of LessWrong rationality is for Yudkowsky to live forever as an emulation running on the mind of the AI God" is completely true, is the purpose of the whole thing, and is also WTF.

Maybe that and "so he started what turned into a cult and a series of cults"? At this point I'm piling up the absurdities again.

The Behind The Bastards approach to all these guys has been "wow these guys are all so wacky haha and also they're evil."

How would you first approach explaining this shit past "it can't be that stupid, you must be explaining it wrong"?

[also posted in sneer classic]

10
submitted 2 weeks ago* (last edited 2 weeks ago) by dgerard@awful.systems to c/sneerclub@awful.systems

yeah i'm sure Matt Levine, qntm and Wildbow are gonna be champing at the bit to attend wordy racist fest

11

12

13
submitted 3 weeks ago* (last edited 3 weeks ago) by jaschop@awful.systems to c/sneerclub@awful.systems

I haven't watched it. I don't know how well she will cover the subject or how deep down the rabbit hole she will venture.

All I know is she's delightful and I sure as hell won't read that bilge myself, so I'm looking forward to an entertaining summary.

Edit: I watched it. I had a good time.

14

15

A solid piece on AI and ethics (and the general lack of them), featuring a nice sideswipe at our very good friends.

16
submitted 1 month ago* (last edited 1 month ago) by dgerard@awful.systems to c/sneerclub@awful.systems

While this linear model's overall predictive accuracy barely outperformed random guessing,

I was tempted to write this up for Pivot but fuck giving that blog any sort of publicity.

the rest of the site is a stupendous assortment of content on a very narrow field of focus, which made this ideal for sneerclub and not just techtakes

17
submitted 1 month ago* (last edited 1 month ago) by o7___o7@awful.systems to c/sneerclub@awful.systems

By Timnit Gebru and Emile P. Torres

Pro-tier sneers by seasoned veterans, get em while they're hot!

Edit: I am reliably informed that it is no longer hot.

18

19

some of the sub’s friends are holding a conference, although they’re still not totally comfortable going public:

but buyers are warned that purchases will “require approval”

aww, the poor babies. even with literal nazis in the whitehouse they still feel uncomfortable spouting their weird shit openly

hopefully if this thing happens at all, someone documents the everliving hell out of every attendee

20

21

Sneerclubbers may recall a recent encounter with "Tracing Woodgrains", née Jack Despain Zhou, the rationalist-infatuated former producer and researcher for "Blocked and Reported", a podcast featuring prominent transphobes Jesse Singal and Katie Herzog.

It turns out he's started a new venture: a "think-tank" called the "Center for Educational Progress." What's this think-tank's focus? Introducing eugenics into educational policy. Of course they don't put it in those exact words, but that's the goal. The co-founder of the venture is Lillian Tara, former executive director of Pronatalist.org, the outfit run by creepy Harry Potter look-alikes (and moderately frequent topic in this forum) Simone and Malcolm Collins. According to the anti-racist activist group Hope Not Hate:

The Collinses enlisted Lillian Tara, a pronatalist graduate student at Harvard University. During a call with our undercover reporter, Tara referred three times to her work with the Collinses as eugenics. “I don’t care if you call me a eugenicist,” she said.

Naturally, the CEP is concerned about IQ and wants to ensure that mentally superior (read: white) individuals don't have their hereditarily deserved resources unfairly allocated to the poors and the stupids. They have a reading list on their Substack, which includes the likes of Arthur Jensen and LessWrong IQ-fetishist Gwern.

So why are Trace and Lillian doing this now? I suppose they're striking while the iron is hot, probably hoping to get some sweet sweet Thiel-bucks as Elon and his goon-squad do their very best to gut public education.

And more proof for the aphorism: "Scratch a rationalist, find a racist".

22
submitted 2 months ago by Emperor@feddit.uk to c/sneerclub@awful.systems
23

the comments are a delight also

24

25

so i stumbled upon a thing on reddit

the thing is that there's an obscure early scifi book by none other than wernher von braun, about mars colonization, where the colonists find an already existing civilization whose leader is called the elon. apparently this is why megaracist elon's father named him that:

Interest in this novel increased in 2021 when people connected the Martian leader, called the Elon, to SpaceX founder Elon Musk, suggesting that von Braun may have somehow foreseen Musk's space exploration ventures.[15] However, Errol Musk, Elon's father, asserted in 2022 that he was fully aware of the von Braun connection in naming his son.[16]

also in that book: tunnels used for high-speed travel; nation-states made obsolete by some magic tech; a highly technocratic planetary government as a result. that stimulant habit seems historically accurate then, even if it's cut with ketamine sometimes. some more red string on the corkboard: https://www.mind-war.com/p/the-elon-how-a-nazi-rocket-scientist - this tracks, as one of his grandparents moved from canada to south africa because 1940s canada wasn't racist enough for them, and with all the technocracy inc. things.

so yeah, motherfucker might be believing - or even groomed into - that he's destined to be a planetary overlord, all based on nazi scifi, and he's playing it out irl with all the fuck you money he has


SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit
