Death and the Gorgon - Greg Egan (asimovs.com)
I'd avoided reading Greg Egan until about last year because I fully expected him to be a cold stemlord shithead, and people only ever talk about his earlier books, the ones about consciousness and identity and stuff, which these days feels very zzzzz. But he is SO COOL and SO FUN!!! He cares in a deep way about people, lived experience, and societies; he loves physics and maths in themselves, because they're beautiful and fun, not because they're ways to look smart or reveal the secrets of the universe. His books are very beautiful. The complete opposite of the silly books of Yud, Scott, nostalgebraist (I have a grudge) et al.
@Amoeba_Girl @Soyweiser I read https://en.wikipedia.org/wiki/Permutation_City and found that something about it seemed deeply wrong, in ways I had trouble articulating.
It's like when you see a bogus mathematical proof of a statement that you know to be false, but the mistake is hidden deep and you can't tell where it has gone wrong; you just know it has.
@bencurthoys @Amoeba_Girl @Soyweiser I'm pretty sure that about 10-20 years ago Egan came out with a serious repudiation of his own ideas about achieving AI through iterated simulations of less-intelligent entities: he noted that implementing it was implicitly genocidal (by murdering all entities that didn't *quite* meet some threshold set by the experimenters, you'd inevitably kill huge numbers of sentient beings just for failing an arbitrary test).
There's a fun/horrifying scene in Ken MacLeod's The Stone Canal where the protagonists revive superhuman intelligences from cold storage, get the answers they need from them, then destroy them with nanotech the superhumans haven't developed defenses against. As one of them says when confronted: "standard programming practice, keep the source code, blow away the object code".
(It's partially justified by the fact that, left alone, the superintelligences would just iteratively bootstrap themselves into catatonic insanity anyway.)