Eliezer compliments Musk, Musk negs Eliezer
(nitter.net)
ITT:
This is the future LWers want.
I'm reminded of a My Little Pony singularity fanfiction (Friendship is Optimal) that I read back when I had poor taste. An AI for a pony MMORPG goes rogue and converts everyone into digital ponies to maximize happiness, but with a pony theme. The victims live out impossibly long, but ultimately superficial, lives doing pony stuff, and goodness gracious, why is there such a weird relationship between rationalists and fanfiction writers?
Most charitable psychoanalysis: projecting their sense of rationality onto a fictional world is a way to express a deep longing for rules and logic in an often cruelly irrational world.
Least charitable: their sense of rationality can only hold true in a fictional world, so they want to live there rather than in reality.
Neutral charity: the author is dead, all interpretation is essentially fanfiction, and since we are all individuals, all relationships with texts/fanfiction are weird.
the most euphemistic description yet of the cursed slab of ponyfucking
"I dig a pony ... Well, you can penetrate any place you go / Yes, you can penetrate any place you go / I told you so"
Whatever, I'll be a pony. Where do I sign up?
Pleasure Island, from Pinocchio. You gotta ask for the pony pass though, or else you’re just gonna get turned into a donkey. To reverse the transformation you gotta go to the island of Dr. Moreau.
It's the combination of big imaginations and little real-world experience. In Friendship is Optimal, the AGI goes from asking for more CPUs to asking for information on how to manufacture its own CPUs, somehow without acquiring silicon crystals or ASML hardware along the way. Rationalist writers imagine that AGI will somehow conjure its own bounty of input resources, rather than participating in the existing resource economy.
In reality, none of our robots have demonstrated the sheer instrumentality required to even approach this sort of doomsday scenario. I think rationalists have a bit of the capitalist blind spot here, imagining that everything and everybody (and everypony!) is a resource.
they're both extremely online. next question