The irony being that rather than writing good SCPs themselves, they'd make for even better subjects of containment & study.
Wow, the term 'epistemically humble' is ingenious, really. I don't have to listen to any critics at all, not because I'm a narcissist, oh no, but because I'm so 'epistemically humble' no one could possibly have anything left to teach me!
How much money would be saved by just funneling the students of these endless ‘AI x’ programs back to the humanities, where they can learn to write (actually good) science fiction to their heart’s content? Hey, finally a way AI actually leads to some savings!
That’s an AI governance PhD right there!
Yeah, I suppose academia and the tech industry are quite different things, for all their problems.
Still, there’s a way to critique systemic issues without mixing it up with self-aggrandizement and the implication that all your coworkers except a few friends are idiots. He really reminds me of Nassim Taleb in that regard, who (among other things) has made some valid criticisms of IQ but whose style is just a bit too much for my sensibilities. ‘Benevolent griftiness’ seems just the right descriptor here. =)
It’s also telling how he shies away from bringing his line of thought to its logical conclusion: if you think you need to “optimize” your child’s genetics to perfection, why shouldn’t you try to optimize their environment in the same way? After all, if you’re such an imperfect being with all your faulty genes, you will probably make mistakes during parenting, so by your own logic the belief that you’re suited to raise a child in the first place is a terrible crime, no different from refusing cuckoldry.
And they call this “effective altruism”. Jesus Christ these people need help.
- THE BASILISK WILL COME FOR YOU ALL
Where did you get that impression? He says himself he is not advocating against aid per se, but that its effects should be judged more holistically, e.g. that organizations like GiveWell should include potential harms alongside the benefits in their reports. The overarching message seems to be one of intellectual humility – not to lose sight of the fact that the ultimate aim is to help another human being who is, in the end, a person with agency just like you, not to feel good about yourself or alleviate your own feelings of guilt.
The basic conceit of projects like EA is the incredible high of self-importance and moral superiority that can blind you when you view yourself as more important than other people by virtue of helping so many of them. No one likes to be condescended to; sure, a life saved with whatever technical fix is better than a life lost, but human life is about so much more than bare material existence – dignity and freedom are crucial to a good life. The ultimate aim should be to shift agency and power into the hands of the powerless, not to bask in being the white knight trotting around the globe saving the benighted from themselves.
This is a long but great read that gets to the very human follies behind the hyper-rational exterior of EA. Highly recommended!
Not Just zhe Autobahn, but zhe Highest Altruismus: Zhe Effective Altruist Case für Replacing Degenerate Stock vith Herrenvolk
I got introduced to the genre through Star Trek, and I always found its moral vision strongly appealing, along with all the weekly alien weirdness & the patient curiosity with which it was approached. Roddenberry set out to create an explicit alternative to the impoverished perspectives of the Cold War era. The Prime Directive is non-interventionist to a fault.
Look, you may think people should simply aim to treat others well, but I, a scientist, need a citation for that.