I went deep into the Yud lore once. A single fluke SAT score served as the basis for Yud's belief in his own world-changing importance. In middle school, he took the SAT and scored 670 verbal and 740 math (out of a maximum 800 each), and the Midwest Talent Search contacted him to say that his scores were very high for a middle schooler. Despite going to great pains to describe how he tried to be humble about it, he also says that he was in the "99.9998th percentile" and "not only bright but waayy out of the ordinary."

I was in the math contest scene. I have good friends who did well on AP Calculus in middle school and were skilled enough at contests that they would easily have gotten an 800 on the math SAT had they taken it. Even so, there were middle schoolers far more skilled than them, and I have seen others who were far less "talented" in middle school rise to great heights later in life. As it turns out, skills can be developed through practice.

Yud's performance would not even be considered impressive in the math contest community, let alone justify calling him one of the most important people in the world. Perhaps at the time, he didn't know any better. But he decided to make this a core part of his self-identity. His life quickly spiraled out of control, starting with his refusal to attend high school.


It is how professors talk to each other in ... debate halls? What the fuck? Yud really doesn't have any clue how universities work.

I am a PhD student right now, so I have a far better idea of how professors talk to each other. The way most professors (in math/CS, at least) communicate in a spoken setting is by giving talks at conferences. The cool professors use chalkboards, but most people these days use slides. As it turns out, debates are really fucking stupid for scientific research, for many reasons:

  1. Science assumes good faith from everyone, while debates are needlessly adversarial. This is why everyone just presents and listens to talks.
  2. Debates are bad for the kind of deep analysis and thought needed to understand new research. It's hard to seriously consider a novel idea when you're expected to come up with a response in the next few minutes.
  3. Debates favor people with good rhetoric who can package their ideas neatly, not the people with the most interesting ideas.
  4. If you want to justify a scientific claim, you do it with experiments and evidence (or a mathematical proof where applicable). What purpose does a debate serve?

I think Yud's fixation on debates and "winning" reflects what he thinks of intellectualism. For him, it is merely a means to an end. The real goal is to be superior and beat up other people.

Choice quote from Dave Karpf:

Policy moderation can never fail. It can only be failed.

It is important to update your beliefs with new information and listen to criticism from people who may disagree with you. But never listen to those SneerClub guys! Their non-Rational sneering will corrupt your bodily fluids!

At the same time, they constantly complain about OpenAI screwing them over by rerouting them to GPT-5. I don't know how to tell them this, but OpenAI is starting to realize that maybe lighting mountains of cash on fire is actually bad.

The saddest part is that they are extremely defensive about all this. The entire subreddit is restricted so nobody can post without moderator approval, and so many posts there reference haters and trolls (like this one). Yeah sure, anything like this will attract plenty of trolls, but it's also a perfect pretense for censoring legitimate concerns. Many of these people encourage others to fall deeper into the hole with reasonable-sounding arguments, and they never see any pushback because all of it has been censored.

Feeding the output of the AI back into itself? Nothing could possibly go wrong with that!

On one side, we have a trolley problem thought experiment involving hypothetical children tied to hypothetical train tracks and some people sending him rude emails. On the other side, we have actual dead children and actual hospitals and apartments reduced to rubble. I wonder which side is more convincing to me?

It's the same pattern of thought as rationalists with AI: trying to fit everything they see into their apocalypse narrative while ignoring the real harms. Rationalists talk a good game about evidence, but what I see them do in practice is very different. First, use mental masturbation (excuse me, "first principles") to arrive at some predetermined edgy narrative, then cherry-pick and misinterpret all the evidence to support it. It is very important that the narratives are edgy, otherwise what are we even writing 10,000-word blog posts for?

I have a lot to say about Scott, since I used to read his blog frequently and it shaped my worldview. This blog title is funny. It was quite obvious that he at least entertained, if not outright supported, the rationalists for a long time.

For me, the final break came when he defended SBF. One of his defenses was that SBF was a nerd, so he couldn't have had bad intentions. I share a lot of background with both SBF and Scott (we all did a lot of math contests in high school), but even I know that being a nerd is not remotely an excuse for stealing billions of dollars.

I feel like a lot of his worldview centers on nerds versus everyone else. There's this archetype of the nerd: awkward, but well-intentioned and smart, someone who can change the world. Nerds know better than everyone else how to improve the world, so they should be given as much power as possible. I now realize that this cultural conception of a nerd has very little to do with how smart or well-intentioned you actually are. The rationalists aren't very good at technical matters (experts in an area can easily spot their errors), but they pull off this culture very well.

Recently, I watched a talk by Scott where he mentioned an anecdote from his time at OpenAI. Ilya Sutskever asked him to come up with a formal, mathematical definition of whether "an AI loves humanity." That actually pissed me off. I thought: can we even define whether a human loves humanity? Yeah, surely all the literature, art, and music in the world is unnecessary now, we've got a definition right here!

If there's one thing I've learned from all this, it's that actions speak louder than any number of 10,000-word blog posts. Perhaps the rationalists could stop their theorycrafting for once and, you know, look at what Sam Altman and friends are actually doing.
