I'm not sure if AI is a bubble that will pop whether or not we get imminent AGI; probably the second one, but he still has a while longer of having to play the rational intellectual
LW are the fundamentalist Baptists of AI, not even Russian Orthodox lol
Every time I get freaked out by AI doom posts on Twitter, they're always coming from an LW goon who's street preaching about how we need to count our Christmases :< I just saw one that got my nerves on edge, checked their account, and they had "printed HPMOR" in their bio and I facepalmed
True, but I was specifically referring to researchers, since most of the researchers repping extinction risk are LW- or Yud-influenced (Musk, Hinton, etc.)
Kinda interesting that it's focused on smaller-scale risks like malicious data instead of ahhh extinction ahhh
Is the whole x-risk thing as common outside of North America? Realizing I've never seen anyone from outside the anglosphere, or even just America/Canada, be as God-killingly Rational as the usual suspects
The funniest thing was the "reasons this thesis might not be true" section: the reasons were infinitely simpler and arguably stronger than the points for it, which bordered on schizophrenic, like "we don't live in a simulation" and "we won't create a paperclip maximizer"