I'm fascinated by the way they're hyping up Daniel Kokotajlo to be some sort of AI prophet. Scott does it here, but so does Caroline Jeanmaire in the OP's twitter link. It's like they all got the talking point (probably from Scott) that Daniel is the new guru. Perhaps they're trying to anoint someone less off-putting and awkward than Yud. (This is also the first time I've ever seen Scott on video, and he definitely gives off a weird vibe.)
After minutes of meticulous research and quantitative analysis, I've come up with my own predictions about the future of AI.
"USG gets captured by AGI".
Promise?
Of course they use shitty AI slop as the background for their web page.
Like, what the hell is it even supposed to be? A mustachioed man writing in a journal in what appears to be a French village town square? Shadowy individuals chatting around an oddly incongruous fire pit? Guitar dude and listener sitting on invisible benches? I get that AI produces this kind of garbage all the time, but did the lesswrongers even bother to evaluate it for appropriateness?
This commenter may be saying something we already knew, but it's nice to have the confirmation that Anthropic is chock full of EAs:
(I work at Anthropic, though I don't claim any particular insight into the views of the cofounders. For my part I'll say that I identify as an EA, know many other employees who do, get enormous amounts of value from the EA community, and think Anthropic is vastly more EA-flavored than almost any other large company, though it is vastly less EA-flavored than, like, actual EA orgs. I think the quotes in the paragraph of the Wired article give a pretty misleading picture of Anthropic when taken in isolation and I wouldn't personally have said them, but I think "a journalist goes through your public statements looking for the most damning or hypocritical things you've ever said out of context" is an incredibly tricky situation to come out of looking good and many of the comments here seem a bit uncharitable given that.)
Sorry, when she started taking Yud's claim to be a "renowned AI researcher" at face value, I noped out.
So now Steve Sailer has shown up in this essay's comments, complaining about how Wikipedia has been unfairly stifling scientific racism.
Birds of a feather and all that, I guess.
why it has to be quite that long
Welcome to the rationalist-sphere.
Scott Alexander, by far the most popular rationalist writer besides perhaps Yudkowsky himself, had written the most comprehensive rebuttal of neoreactionary claims on the internet.
Hey Trace, since you're undoubtedly reading this thread, I'd like to make a plea. I know Scott Alexander Siskind is one of your personal heroes, but maybe you should consider digging up some dirt in his direction too. You might learn a thing or two.
You know the doom cult is having an effect when it starts popping up in previously unlikely places. Last month the socialist magazine Jacobin ran an extremely long cover feature on AI doom, which it bought into completely. The author is an effective altruist who interviewed, and took seriously, people like Katja Grace, Dan Hendrycks and Eliezer Yudkowsky.
I used to be more sanguine about people's ability to see through this bullshit, but eschatological nonsense seems to tickle something fundamentally flawed in the human psyche. This LessWrong post is a perfect example.
Imagine thinking there is actually some identifiable thing called "white culture". As if a skin color defines a culture.
Yeah, sounds like a Nazi.
Scott talks a bit about it in the video, but Daniel was recently in the news as the guy who refused to sign a non-disparagement agreement when he left OpenAI, which led them to claw back his stock options.