You betcha it is. Lab leak conspiracy mongering (with added fear over gain-of-function research analogized to AGI research) is a popular "viewpoint" on lesswrong, aided, as is typical, by misapplication of Bayes' theorem and Dunning-Kruger misreadings of the "evidence".
That sounds like actual leftism, so no, they really don't have the slightest inkling; they still think mainstream Democrats are leftists (and that Democrats with some traces of leftism, like Bernie or AOC, are radical extremist leftists).
Yeah, if the author had any self-awareness they might consider why the transphobes and racists they have made common cause with are so anti-science, and why pro-science and college-educated people lean progressive, but that would mean admitting their bigotry is opposed to actual scientific understanding and higher education, and so they will understandably come up with any other rationalization.
Big effort post... reading it will still be less effort than listening to the full Behind the Bastards podcast, so I hope you appreciate it...
To summarize it from a personal angle...
In 2011, I was a high schooler who liked Harry Potter fanfics. I found Harry Potter and the Methods of Rationality a fun story, so I went to the lesswrong website and was hooked on all the neat pop-science explanations. The AGI stuff and cryonics and transhumanist stuff seemed a bit fanciful but neat (after all, the present would seem strange and exciting to someone from a hundred years ago).

Fast forward to 2015: HPMOR was finally finishing, I was finishing my undergraduate degree, and in the course of getting a college education I had actually taken some computer science and machine learning courses. Reconsidering lesswrong with that level of education... I noticed MIRI (the institute Eliezer founded) wasn't actually doing anything with neural nets, they were playing around with math abstractions, they hadn't actually published much formal writing (well, not actually any peer-reviewed work, but at the time I didn't appreciate the difference between peer review and self-published preprints), and even the informal lesswrong posts had basically stopped. I had gotten into a related blog, slatestarcodex (written by Scott Alexander), which filled some of the same niche, but in 2016 Scott published a defense of Trump that normalized him, and I realized Scott had an agenda at cross purposes with the "center-left" perspective he portrayed himself as having.

At around that point I found the reddit version of sneerclub, and it connected a lot of dots I had been missing. Far from the AI expert he presented himself as, Eliezer had basically done nothing but write loose speculation on AGI and pop-science explanations. And Scott Alexander was actually trying to push "human biodiversity" (i.e. racism dressed up in pseudoscience) and neoreactionary/libertarian beliefs. From there it became apparent to me that a lot of Eliezer's claims weren't just a bit fanciful, they were actually really, really ridiculous, and the community he had set up had a deeply embedded racist streak.
To summarize it focusing on Eliezer...
In the late 1990s, Eliezer was on various mailing lists, speculating with bright-eyed optimism about nanotech and AGI and genetic engineering and cryonics. He tried his hand at getting in on it: first trying to write a stock-trading bot... which didn't work; then trying to write a seed AI (an AI that would bootstrap to strong AGI and change the world)... which also didn't work; then trying to develop a new programming language for AI... which he never finished. Then he realized he had been reckless: an actually successful AI might have destroyed mankind, so really it was lucky he didn't succeed, and he needed to figure out how to align an AI first.

So from the mid-2000s on he started getting donors (this is where Thiel comes in) to fund his research. People kind of thought he was a crank, or just weren't concerned with his ideas, so he concluded they must not be rational enough and set about, first on Overcoming Bias and then on his own site, lesswrong, writing a sequence of blog posts to fix that (putting any actual AI research on hold). Those posts got moderate attention, which exploded in the early 2010s when a side project of writing Harry Potter fanfiction took off. He used this fame to get more funding and spread his ideas further.

Finally, around the mid-2010s, he pivoted to actually trying to do AI research again... MIRI has a sparse collection of papers (sparse compared to the number of researchers they hired and to how productive good professors in academia are) focused on an abstract concept for AI called AIXI, which basically depends on having infinite computing power and isn't remotely implementable in the real world. Last I checked they hadn't gotten any further than that. Eliezer was skeptical of neural network approaches, derisively thinking of them as voodoo science trying to blindly imitate biology with no proper understanding, so he wasn't prepared for neural nets taking off around 2012 and leading to GPT and LLM approaches. So when ChatGPT started looking impressive, he started panicking, leading to him going on a podcast circuit professing doom (after all, if he and his institute couldn't figure out AI alignment, no one can, and we're likely all doomed for reasons he has written tens of thousands of words of blog posts about without ever being refuted at a level of quality he considers valid).
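For the curious: the reason AIXI isn't implementable is baked into its definition. This is the textbook formula (from Hutter's work, not MIRI's papers, and reproduced from memory, so treat it as a sketch): the agent does an expectimax over a Solomonoff-style mixture of every program q a universal Turing machine U could run, and that inner sum over all programs is the incomputable part.

```latex
% AIXI's expectimax definition (Hutter's formulation, sketched from memory):
% pick the action maximizing expected future reward under a Solomonoff-style
% mixture over *all* programs q for a universal Turing machine U.
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl[ r_t + \cdots + r_m \bigr]
       \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
% The inner sum ranges over every possible program, so no finite machine can
% evaluate it -- AIXI is a thought experiment, not something you can build.
```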
To tie off some side points:
- Peter Thiel was one of the original funders of Eliezer and his institution. It was probably a relatively cheap attempt to buy reputation, and it worked to some extent. Thiel has cut funding since Eliezer went full doomer (Thiel probably wanted Eliezer as a Silicon Valley hype man, not the head of an apocalypse cult).
- As Scott continued to write posts defending the far right from a weird posture of being center-left, Slatestarcodex got an increasingly racist audience, culminating in a spin-off forum with full-on 14-words white supremacists. He has played a major role in the alt-right pipeline that produced some of Trump's most loyal supporters.
- Lesswrong also attracted some of the neoreactionaries (libertarian wackjobs who want a return to monarchy), among them Mencius Moldbug (real name Curtis Yarvin). Yarvin has written about strategies for dismantling the federal government, which DOGE is now implementing.
- Eliezer may not have been much of a researcher himself, but he inspired a bunch of people, so a lot of OpenAI researchers buy into the hype and/or doom. Sam Altman uses Eliezer's terminology as marketing hype.
- As for lesswrong itself... what is original isn't good and what's good isn't original. Lots of the best sequences are just remixed forms of books like Kahneman's "Thinking, Fast and Slow". And the worst sequences demand you favor Eliezer's take on Bayesianism over actual science, or are focused on the coming AI salvation/doom.
- Other organizations have taken on the "AI safety" mantle. They are more productive than MIRI, in that they actually do stuff with actually implemented 'AI', but what they do is typically contrive (emphasis on contrive) scenarios where LLMs will "act" "deceptive" or "power seeking" or whatever scary buzzword you can imagine, and then publish papers with titles and abstracts that imply the scenarios are much more natural than they really are.
Feel free to ask any follow-up questions if you genuinely want to know more. If you actually already know about this stuff and are looking for a chance to evangelize for lesswrong or the coming LLM God, the mods can smell that out and you will be shown the door, so don't bother (we get one or two people like that every couple of weeks).
We're already behind schedule; we're supposed to have AI agents in two months (actually we were supposed to have them in 2022, but ignore the failed bits of earlier prophecy in favor of the parts where you can claim success)!
I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the term "0-2 paradigm shifts", so he can claim prediction success for stuff LLMs do, and "paradigm shift" is vague enough that he could still claim success if it's been another decade or two and there has only been one more big paradigm shift in AI (that still fails to make it AGI).
Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?
Committing to a hard timeline at least means making fun of them and explaining how stupid they are to laymen will be a lot easier in two years. I doubt the complete failure of this timeline will actually shake the true believers though. And the more experienced ~~grifters~~ forecasters know to keep things vaguer so they will be able to retroactively reinterpret their predictions as correct.
My understanding is that it is possible to reliably (at the level of reliability required for lab animals) insert genes for individual proteins. For example, if you want a transgenic mouse line with neurons that fluoresce under laser light when they are firing, you can insert a gene sequence for GCaMP without too much hassle. You can even put the inserted gene under the control of certain promoters so that it will only activate in certain types of neurons and not others. Some really ambitious work has inserted multiple sequences for different colors of optogenetic indicators into a single mouse line.
If you want something more complicated than a sequence for a single protein, or at most a few proteins, never mind something as conceptually nebulous as "intelligence", then yeah, the technology, and even the basic scientific understanding, is lacking.
Also, the gene insertion techniques that are reliable enough for experimenting on mice and rats aren't nearly reliable enough to use on humans (not that anyone even knows what genes to insert in the first place for anything but the most straightforward of genetic disorders).
I’m almost certain I’ve seen EY catch shit on Twitter (from actual ML researchers, no less) for insinuating something very similar.
A sneer classic: https://www.reddit.com/r/SneerClub/comments/131rfg0/ey_gets_sneered_on_by_one_of_the_writers_of_the/
Well, if they were really "generalizing" just from training on crap tons of written text, they could implicitly develop a model of the letters in each token from the examples of spelling, word play, turning words into acronyms, and acrostic poetry on the internet. The AI hype men would like you to think they generalize just off the size of their datasets, the length of training, and the size of the models. But they aren't really "generalizing" that much (and even the examples of them apparently generalizing are kind of arguable), so they can't work around this weakness.
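To make the tokenization point concrete, here's a quick sketch (assuming you have OpenAI's tiktoken package installed; exactly how a word splits depends on the encoding, so I'm not asserting any particular split): the model is fed integer token ids, not characters, so something like counting the r's in "strawberry" is only learnable indirectly.

```python
# Quick illustration of why letter-level tasks are awkward for LLMs:
# the model consumes integer token ids, not individual characters.
# Assumes the `tiktoken` package (OpenAI's tokenizer library) is installed.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-3.5/4-era encoding

word = "strawberry"
token_ids = enc.encode(word)
token_chunks = [enc.decode_single_token_bytes(t) for t in token_ids]

print(token_ids)     # a short list of integers (exact split depends on the encoding)
print(token_chunks)  # the byte chunks those ids stand for -- not single letters
print(word.count("r"))  # trivial in Python; the model only ever sees the ids above
```

The point being: if the letters aren't in the input, any spelling knowledge has to come from examples of spelling and word play in the training text, which is exactly the weakness described above.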
The counting failure in general is even clearer, and lacks the excuse of unfavorable tokenization. The AI hype men would have you believe just an incremental improvement in multi-modality or scaffolding will overcome this, but I think they need more fundamental improvements to the entire architecture they are using.
The thing that gets me the most about this is that they can't imagine Eliezer might genuinely be in favor of inclusive language, so his use of people's preferred pronouns must be a deliberate, calculated political-correctness move, and therefore a violation of the norms espoused by the sequences (which the author takes as a given that Eliezer has never broken before, making this violation of his own sequences some sort of massive and unique problem).
To save you all having to read the rant...
—which would have been the end of the story, except that, as I explained in a subsequent–subsequent post, "A Hill of Validity in Defense of Meaning", in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that suggested that people were philosophically confused if they disputed that men could be women in some unspecified metaphysical sense.
Also, bonus sneer points for developing weird terminology for everything, like referring to Eliezer and Scott as the Caliphs of rationality.
Caliphate officials (Eliezer, Scott, Anna) and loyalists (Steven) were patronizingly consoling me
One of the top replies does call it like it is...
A meaningful meta-level reply, such as "dude, relax, and get some psychological help" will probably get me classified as an enemy, and will be interpreted as further evidence about how sick and corrupt is the mainstream-rationalist society.
Yeah, he thinks Cyc was a switch from the brilliant meta-heuristic soup of Eurisko to the dead end of expert systems, but according to the article I linked, Cycorp was still programming in extensive heuristics and meta-heuristics alongside the expert-system entries they were making, as part of Cyc's general resolution-based inference engine. It's just that Cyc wasn't able to do anything useful with these heuristics; in fact they were slowing it down so much that Cycorp started turning them off in 2007 and turned the general inference system off completely in 2010!
To be ~~fair~~ far too charitable to Eliezer, this little factoid has cites from 2022 and 2023, when Lenat wrote more about the lessons from Cyc, so it's not like Eliezer could have known this back in 2008. To ~~sneer~~ be actually fair to Eliezer, he should have figured that the guy who actually wrote and used Eurisko, talked about how Cyc was an extension of it, and repeatedly referred back to the lessons of Eurisko would in fact try to include a system of heuristics and meta-heuristics in Cyc! To properly sneer at Eliezer... it probably wouldn't have helped even if Lenat had kept the public up to date on the latest lessons from Cyc through academic articles; Eliezer doesn't actually keep up with the literature as it's published.