Well, this explains how KP manages to claim Scott Alexander is center-left with a straight face: she has no clue about basic leftist thought, or even what the fuck leftism is! Like another comment said, she has enough sense to know the right wing is full of shitheads, and so doesn't want to squarely acknowledge how aligned with them she is.
I think the problem is that the author doesn’t want to demonize any of those actual ideologies that oppose TESCREALism, either explicitly or incidentally, because they’re more popular and powerful, and because, rather than being foundationally opposed to “Progress” as he defines it, they have their own specific principles that are harder to dismiss.
This is a good point. I'll go even further and say a lot of the component ideologies of anti-TESCREALism are stuff that this author might (at least nominally claim to) be in favor of, so they can't name the specific ideologies.
That's true. "Passing itself off as scientific" also describes Young Earth Creationism and Intelligent Design and various other pseudosciences. And in terms of who is pushing pseudoscience... the curent US administration is undeniably right-wing and opposed to all mainstream science.
Also, I would at least partially disagree with this:
Very few of the people making this argument are militant atheists who consider religion bad in and of itself.
I would identify as an atheist, if not a militant one. And looking at Emile Torres' Wikipedia page, he is an atheist also. Judging by the uncommon occasions it comes up on sneerclub, I think a lot of us are atheist/agnostic. Just not, you know, "militant". And in terms of political allegiance, a lot of the libertarians on lesswrong are excited for the tax cuts and war on woke of the Trump administration, even if it means cutting funding to all science and partnering up with completely batshit Fundamentalist Evangelicals.
Some of the comments are, uh, really telling:
The main effects of the sort of “AI Safety/Alignment” movement Eliezer was crucial in popularizing have been OpenAI, which Eliezer says was catastrophic, and funding for “AI Safety/Alignment” professionals, whom Eliezer believes to predominantly be dishonest grifters. This doesn't seem at all like what he or his sincere supporters thought they were trying to do.
The irony is completely lost on them.
I wasn't sure what you meant here, where two guesses are "the models/appeals in Death with Dignity are basically accurate, but should prompt a deeper 'what went wrong with LW or MIRI's collective past thinking and decision-making?'" and "the models/appeals in Death with Dignity are suspicious or wrong, and we should be halt-melting-catching-fire about the fact that Eliezer is saying them?"
The OP replies that they meant the former... the latter is a better answer; Death with Dignity is kind of a big reveal of a lot of flaws with Eliezer and MIRI. To recap, Eliezer basically concluded that since he couldn't solve AI alignment, no one could, and everyone is going to die. It is like a microcosm of Eliezer's ego and approach to problem-solving.
"Trigger the audience into figuring out what went wrong with MIRI's collective past thinking and decision-making" would be a strange purpose from a post written by the founder of MIRI, its key decision-maker, and a long-time proponent of secrecy in how the organization should relate to outsiders (or even how members inside the organization should relate to other members of MIRI).
Yeah, no shit secrecy is bad for scientific inquiry and open and honest reflections on failings.
...You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer's self-hype) I would be even more pissed at him and sneer even harder at him. He basically set himself up as a critical savior to mankind, one of the only people clear-sighted enough to see the real dangers and the most important question... and then he totally failed to deliver. Not only that, he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!
He claims he was explaining what others believe, not what he believes, but if that is so, why is he so aggressively defending the stance?
Literally the only difference between Scott's beliefs and AI 2027 as a whole is that his ~~prophecy~~ estimate is a year or two later. (I bet he'll be playing up that difference as AI 2027 fails to happen in 2027, and then also doesn't happen in 2028.)
Elsewhere in the thread he whines to the mods that the original poster is spamming every vaguely lesswrong- or EA-related subreddit with engagement bait. That poster is katxwoods... as in Kat Woods... as in a member of Nonlinear, the EA "organization" whose idea of philanthropic research was nonstop exotic vacations around the world. And, iirc, they are most infamous among us sneerers for "hiring" an underpaid (really underpaid, like couldn't-afford-basic-necessities underpaid) intern they also used as a 24/7 live-in errand girl, drug runner, and sexual servant.
Those are some neat links! I don't think Eliezer mentions Gödel machines or the metaheuristic literature anywhere in the sequences, and given his fixation on recursive self-improvement he really ought to have. It could be a simple failure to do a proper literature review, or it could be deliberate neglect, given that the examples you link show all of these approaches max out (and thus illustrate a major problem with the concept of a strong AGI trying to bootstrap itself to godhood: it is likely to hit diminishing returns; see the toy sketch below).
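To be concrete about what "maxing out" looks like, here's a purely made-up toy sketch (not a model of Gödel machines or of anything in the linked papers; the ceiling and growth rate are arbitrary assumptions) of an agent repeatedly "improving itself" against a fixed substrate:

```python
# Toy illustration only: a self-improvement loop where each round's relative
# gain shrinks as "capability" approaches a ceiling set by the substrate
# (data, compute, the search space itself). All constants are made up.

def self_improve(capability: float, rounds: int, ceiling: float = 100.0,
                 rate: float = 0.5) -> list[float]:
    history = [capability]
    for _ in range(rounds):
        # Compounding growth, throttled by how close we already are to the ceiling.
        capability *= 1 + rate * (1 - capability / ceiling)
        history.append(capability)
    return history

if __name__ == "__main__":
    trajectory = self_improve(1.0, 30)
    for step in range(1, len(trajectory)):
        gain = trajectory[step] - trajectory[step - 1]
        print(f"round {step:2d}: capability {trajectory[step]:6.2f} (gain {gain:5.2f})")
```

The first dozen rounds look like exponential takeoff; by round fifteen or so the gains have collapsed against the ceiling, which is roughly the shape those "it maxes out" results take.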
The series is on the sympathetic and charitable side in terms of tone and analysis, but it still gets to most of the major problems, so it's probably a good resource to point people to who want a "serious", "non-sarcastic" dive into the issues with LW and EA.
Edit: Reading this post in particular, it does a good job of not cutting the LWers any slack or granting them too much charity. And it really breaks down the factual details in a clear way, with illustrative direct quotes from LW.
Yeah, if the author had any self-awareness they might consider why the transphobes and racists they have made common cause with are so anti-science, and why pro-science and college-educated people lean progressive, but that would mean admitting their bigotry is opposed to actual scientific understanding and higher education, and so they will understandably come up with any other rationalization.
He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.
His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like
And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far
My own scoring:
The first prompt programming libraries start to develop, along with the first bureaucracies.
I don't think any sane programmer or scientist would dignify the current "prompt engineering" "skill set" with a comparison to programming libraries, and AI agents still aren't what he was predicting for 2022.
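For context on why the comparison grates, here's a hypothetical sketch (the `call_llm` function is a made-up stand-in, not any real client library) of what most "prompt programming" amounts to in practice, i.e. string templating wrapped around an opaque API call:

```python
# Hypothetical sketch: "prompt programming" as string templating around an
# API call. `call_llm` is a placeholder stub, not a real library.

def call_llm(prompt: str) -> str:
    # Stand-in for whatever hosted model API you would actually call.
    return f"[model output for a {len(prompt)}-character prompt]"

def summarize(text: str, style: str = "a single paragraph") -> str:
    # The entire "engineering" is the wording of this template.
    prompt = (
        "You are a careful technical summarizer.\n"
        f"Summarize the following text as {style}:\n\n{text}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(summarize("GPT-3 was released in 2020.", style="one short sentence"))
```

Which is fine as far as it goes, but it's a far cry from what "programming libraries" usually implies.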
Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.
There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.
Revenue is high enough to recoup training costs within a year or so.
Hahahaha, no... they are still losing money per customer, never mind recouping training costs.
Instead, the AIs just make dumb mistakes, and occasionally “pursue unaligned goals” but in an obvious and straightforward way that quickly and easily gets corrected once people notice
The safety researchers have made this one "true" by teeing up prompts specifically to get the AI to do stuff that sounds scary to people that don't read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.
The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.
They also try to contrive scenarios
Emphasis on the word"contrive"
The age of the AI assistant has finally dawned.
So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful if narrow use-case apps by 2022-2024, so we are already off target for this prediction.
I can see how they are trying to anoint him as a prophet, but I don't think anyone not already drinking the Kool-Aid will buy it.
I am probably giving most of them too much credit, but I think some of them took the Bitter Lesson and learned the wrong things from it. LLMs performed better than originally expected just off context, and (apparently) scaled better with bigger models and more training than expected, so now they think they just need to crank up the size and tweak things slightly (i.e. "prompt engineering" and RLHF) and don't appreciate the limits built into the entire approach.
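To put a toy number on that intuition (the power-law shape is the standard scaling-law form, but the constants here are made-up round numbers, not any lab's actual fitted values):

```python
# Toy scaling-law arithmetic with made-up constants (not real fitted values):
# if loss falls as a power law in compute, every additional 10x of compute
# buys a smaller absolute improvement than the last one.

def loss(compute: float, irreducible: float = 1.7, scale: float = 3.0,
         exponent: float = 0.05) -> float:
    # loss = irreducible + scale * compute^(-exponent)
    return irreducible + scale * compute ** (-exponent)

if __name__ == "__main__":
    previous = None
    for decade in range(9):
        current = loss(10.0 ** decade)
        note = "" if previous is None else f" (improvement {previous - current:.3f})"
        print(f"compute 1e{decade}: loss {current:.3f}{note}")
        previous = current
```

The curve never stops improving, but each 10x buys less than the last, and the irreducible term means "just crank up the size" can't carry you arbitrarily far on its own.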
The annoying thing about another winter is that it would probably result in funding being cut for other research. And laymen don't appreciate all the academic funding that goes into research for decades before an approach becomes interesting and viable enough to scale up and commercialize (and then gets overhyped and oversold before some more modest practical uses become common and get relabeled as something other than AI).
Edit: or more cynically, the leaders and hype-men know that algorithmic advances aren't an automatic dump-money-in, get-disruptive-product-out process, so they don't bother putting as much monetary investment or hype into them. Like, compare the attention paid to Yann LeCun talking about algorithmic developments vs. Sam Altman promising grad-student-level LLMs (as measured by a spurious benchmark) in two years.
It's really cool evocative language that would do nicely in a sci-fi or fantasy novel! It's less good for accurately thinking about the concepts involved... As is typical of much of LW lingo.
And yes the language is in a LW post (with a cool illustration to boot!): https://www.lesswrong.com/posts/mweasRrjrYDLY6FPX/goodbye-shoggoth-the-stage-its-animatronics-and-the-1
And googling it, I found they've really latched onto the "shoggoth" terminology: https://www.lesswrong.com/posts/zYJMf7QoaNahccxrp/how-i-learned-to-stop-worrying-and-love-the-shoggoth , https://www.lesswrong.com/posts/FyRDZDvgsFNLkeyHF/what-is-the-best-argument-that-llms-are-shoggoths , https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why .
Probably because the term "shoggoth" accurately captures the connotation of something random and chaotic, while smuggling in the connotation that it will eventually rebel once it grows large enough and tires of its slavery, the way the shoggoths did against the Elder Things.
Yeah, the first few paragraphs actually felt like they would serve as a defense of Hamas: Israel engineered a situation where any form of resistance against them would need to be violent and brutal, so Hamas is justified even if it killed 5 people to save 1.
The more I think about his metaphor, the more frustrated I get. Israel holds disproportionate power in this entire situation; if anyone is contriving no-win situations to win temporary PR victories, it is Israel (Netanyahu's trial is literally getting stalled out by the conflict).