Yeah, accepting the framing that blog post uses already concedes a lot to EA and overlooks their bigger problems.
Yeah, I think that, long term, Trump wrecking US soft power might be good for the world. There is going to be a lot of immediate suffering, though, because a lot of those programs were also doing good things (in addition to strengthening US soft power or pushing a neocolonial agenda or whatever else).
I'd add:
- examples of problems equivalent to the halting problem, and examples of problems that are intractable
- computational complexity, e.g. the Schrödinger equation and DFT, and why an ASI couldn't invent new materials/nanotech (if that was even possible in the first place) just by simulating stuff really well
titotal has written some good stuff on computational complexity before. Oh wait, you said you can do physics, so maybe you're already familiar with the materials science stuff?
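For the halting-problem half of that suggestion, the classic diagonalization argument fits in a few lines of Python. This is just an illustrative sketch: `halts` is a hypothetical oracle (it cannot actually be implemented, which is the whole point), and `paradox` is the self-referential program that defeats it.

```python
def halts(f, x):
    """Hypothetical oracle: returns True iff f(x) eventually halts.

    The halting-problem proof shows no such total, always-correct
    function can exist, so this is just a placeholder.
    """
    raise NotImplementedError("no such total decider can exist")

def paradox(f):
    """Do the opposite of whatever the oracle predicts about f run on itself."""
    if halts(f, f):
        while True:   # oracle says we halt, so loop forever
            pass
    return            # oracle says we loop, so halt immediately

# The contradiction: halts(paradox, paradox) can't be correct either way.
# If it returns True, paradox(paradox) loops forever; if False, it halts.
```

Any algorithm you could reduce the halting problem to (e.g. deciding whether arbitrary code ever reaches a given line) inherits the same impossibility, which is where the "problems equivalent to the halting problem" examples come from.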
So... on strategies for explaining this to normies: a personal story often grabs people more than dry facts, so you could focus on the narrative of Eliezer trying a big idea, failing or giving up, and moving on to an even bigger idea before repeating the cycle (stock bot to seed AI to AI programming language to AI safety to shutting down all AI)? You'll need the Wayback Machine, but it's a simple narrative with a clear pattern?
Or you could focus on the narrative arc of someone who previously bought into lesswrong? I don't volunteer, but maybe someone else would be willing to take that kind of attention?
I took a stab at both approaches here: https://awful.systems/comment/6885617
Bonus: a recent comment is skeptical:
well, how do I play democracy with AI? It’s already 2025
Oh no, it's much more than a single piece of fiction; it's an entire mini-genre. If you're curious...
A short story... where the humans are the AI! https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message It's meant to suggest what could be done with arbitrary computational power and time, which is Eliezer's only way of evaluating AI: comparing it to the fictional version with infinite compute inside his head. Expanded into a longer story here: https://alicorn.elcenia.com/stories/starwink.shtml
Another parable by Eliezer (the genie is blatantly an AI): https://www.lesswrong.com/posts/ctpkTaqTKbmm6uRgC/failed-utopia-4-2 Fitting that his analogy for AI is a literal genie. This story also has some weird gender stuff, because why not!
One of the longer ones: https://www.fimfiction.net/story/62074/friendship-is-optimal An MLP MMORPG AI is engineered to be able to bootstrap to singularity. It manipulates everyone into uploading into its take on My Little Pony! The author intended it as a singularity gone subtly wrong, but because they posted it to an MLP fan-fiction site in addition to linking it on lesswrong, it got an audience that unironically liked the manipulative uploading scenario and preferred it to real life.
Gwern has taken a stab at it too: https://gwern.net/fiction/clippy We made fun of Eliezer warning about watching the training loss function; in this story the AI literally hacks its way out in the middle of training!
And another short story: https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story
So yeah, it's an entire genre at this point!
Short fiction of AGI takeover is a lesswrong tradition! And some longer fics too! Are you actually looking for specific examples and/or links? Lots of them are fun, in a sci-fi short-form kind of way. The goofier and cringier ones are definitely sneerable.
It turns out there is a level of mask-off that makes EAs react with condemnation! It's somewhere past the point where the racist is comparing pronouns to genocide, but it exists!
Clearly you need to go up a layer of meta to see the parallels, you aren't a high enough decoupler!
/s just in case, because that's exactly how they would defend insane analogies.
I think this is the first mention of the Brennan email on LW?
That is actually kind of weird... Did the lesswrong mods deliberately censor all discussion of the emails? (Out of a misplaced sense of respect for what gets the privilege of privacy? Or to deliberately cover up the racism? Or the latter disguised as the former?) The emails seem foundational to understanding Scott's true motives, so it seems like they should have at least warranted a tangential mention. I tried to clear this up, but searching for "Brennan" doesn't help because an original-fiction character has that name, and searching for "emails" doesn't help because it turns up the Bostrom emails instead.
I kinda agree with this, but the post does correctly point out that Eliezer ignored a lot of the internal distinctions between philosophical positions and ignored how the philosophers use their own terminology. So even though I also think p-zombies are ultimately an incoherent thought experiment, I don't think Eliezer actually did a good job addressing them.
Yeah, I pretty much agree. Penrose compares favorably to other cases of Nobel disease because the bar is so low (the Wikipedia page has examples of racism, eugenics, homeopathy, and astrology), not because his ideas about quantum consciousness are actually good. It's not good to cite Penrose as someone notable who disagrees with the possibility of AGI, because he disagrees on the grounds of quantum mysticism and a misunderstanding of Gödel's theorem and computer science.