This brings to mind my favourite podcast about black conspiracy theories: My Momma Told Me. They discuss Yakub and Oprah in equal measure.
Oh yes thanks for the reminder.
quirk-washing TREACLES
I can’t wait to be quirk-washed, I’m ready to hang up my pick-me hat and let the New Yorker do the work for me
Oh, we value logic, you’re just bad at it.
I suspect a large portion of people in EA leadership were already on the latter train and posturing as the former. The former is actually kinda problematic in its own way! If a problem was solvable purely by throwing money at it, then what is the need for a charity at all?
In the "Rationalist Apologetic Overtures" skill tree we got:
- Denying wrongdoing/incorrectness (cantrip)
- Accusing the other side of bad faith (cantrip)
- Mentioning own IQ (cantrip)
- Non-apology (1st level) (e.g., “I’m sorry you feel that way”)
- Empty apology (3rd level)
- Insincere apology (5th level)
- Acknowledgement of individual experience outside of one's own (7th level)
- Admission of wrongdoing/incorrectness (9th level)
- Genuine guilt (11th level)
- Actual complete apology (13th level)
- Admitting the other person is right (15th level)
Read the whole damn thing. Near the end:
One of the last times I spoke to Scott, before the Turkey-Shoot Clusterfuck began, his mother had been in the hospital half a dozen times in recent weeks. She is in her seventies and has a thyroid condition, but on a recent visit to the E.R. she waited nearly seven hours, and left without being seen by a doctor. “The right Copilot could have diagnosed the whole thing, and written her a prescription within minutes,” he said. But that is something for the future. Scott understands that these kinds of delays and frustrations are currently the price of considered progress—of long-term optimism that honestly contends with the worries of skeptics.
Either Scott is swimming in an Olympic-sized pool of AI Kool-Aid, constantly thinking about how else AI can invade aspects of his personal life, or he’s just a normal exec willing to cynically spin every aspect of his personal life in service of the grift. It’s probably the latter.
For my research, the primary speedups from AI come from using ChatGPT to speed up coding a bit and to help write bureaucratic applications.
Lmfao
A meandering, information-sparse, holier-than-thou, scientifically incorrect, painful-to-read screed that is somehow both pro- and anti-AI, in the form of a dialogue for some reason? Classic Yud.
Tangent to your point: what would happen if we started misusing TESCREAL terms to dilute their meaning? Some ideas:
“I don’t want to go to that party. It’s an x-risk.”
“No, I didn’t really like those sequel films. They were inscrutable Matrices.”
“You know, holding down the A button and never letting up is a viable strategy as long as you know how to brake and mini-turbo in Mario Kart. Look up ‘effective accelerationism’.”
Anyway I doubt it would do anything other than give us a headache from observing/using rat terms. Just wanted to have a lil fun.
I will answer these sincerely in as much detail as necessary. I will only do this once, lest my status amongst the sneerclub fall.
- I don't think this question is well-defined. It implies that we can qualify all the relevant domains and quantify average human performance in those domains.
- See above.
- I think "AI systems" already control "robotics". Technically, I would count kids writing code for a simple motorised robot as satisfying this. Everywhere up the ladder, this is already technically true. I imagine you're trying to ask about AI-controlled robotics research, development, and manufacturing: something like what you'd see in the Terminator franchise, where Skynet takes over, develops more advanced robotic weapons, etc. If we had Skynet? Sure, Skynet as formulated in the films would produce that future. But that would require us to be living in that movie universe.
- This is a much more well-defined question. I don't have a belief that would point me towards a number or probability, so no answer as to "most." There are a lot of factors at play here. Still, in general, as long as human labour can be replaced by robotics, someone will, at the very least, perform economic calculations to determine if that replacement should be done. The more significant concern here for me is that in the future, as it is today, people will still only be seen as assets at the societal level, and those without jobs will be left by the wayside and told it is their fault that they cannot fend for themselves.
- Yes, and we already see that as an issue today. Love it or hate it, the partisan news framework produces some consideration of the problems that pop up in AI development.
Time for some sincerity mixed with sneer:
I think the disconnect that I have with the AGI cult comes down to their certainty about whether we will get AGI and, more generally, their unearned confidence about arbitrary scientific/technological/societal progress being made in the future. Specifically with AI => AGI, there isn't a roadmap to get there. We don't even have a good idea of where "there" is. The only thing the AGI cult has to "convince" people that it is coming is a Gish gallop of specious arguments, or as they might put it, "Bayesian reasoning." As we say, AGI is a boogeyman, and its primary use is bullying people into a cult for MIRI donations.
Pure sneer (to be read in a mean, high-school bully tone):
Look, buddy, just because Copilot can write spaghetti less tangled than you doesn't mean you can extrapolate that to AGI exploring the stars. Oh, so you use ChatGPT to talk to your "boss," who is probably also using ChatGPT to speak to you? And that convinces you that robots will replace a significant portion of jobs? Well, that at least convinces me that a robot will replace you.
Ah, finally, a paper article to pin to my Roko’s basilisk/David Gerard/X Æ A-Xii red yarn corkboard