[-] scruiser@awful.systems 8 points 1 month ago* (last edited 1 month ago)

Nice job summarizing the lore in only 19 minutes (I assume this post was aimed at providing full context to people just joining or at least relatively new to tracking all this... stuff).

Some snarky comments, not because it was a bad summary for leaving these out (all the asides you could add would easily double the length and leave a casual listener/reader more confused), but because I think they are funny ~~and I need to vent~~

> You’ll see him quoted in the press as an “AI researcher” or similar.

Or decision theorist! With a grand total of one decision theory paper, which he didn't bother getting through peer review because the reviewers wanted, like, actual context and an actual decision theory, not just hand-waves at paradoxes on the fringes of decision theory.

> What Yudkowsky actually does is write blog posts.

He also writes fanfiction!

> I’m not even getting to the Harry Potter fanfic, the cult of Ziz, or Roko’s basilisk today!

Yeah this rabbit hole is deep.

> The goal of LessWrong rationality is so Eliezer Yudkowsky can live forever as an emulated human mind running on the future superintelligent AI god computer, to end death itself.

Yeah in hindsight the large number of ex-Christians it attracts makes sense.

> And a lot of Yudkowsky’s despair is that his most devoted acolytes heard his warnings “don’t build the AI Torment Nexus, you idiots” and they all went off to start companies building the AI Torment Nexus.

He wrote a lot of blog posts about how smart and powerful the Torment Nexus would be, and how we really need to build the Anti-Torment Nexus, so if he had proper skepticism of Silicon Valley and startup/VC culture, he really should have seen this coming.

> There was also a huge controversy in Effective Altruism last year when half the Effective Altruists were shocked to discover the other half were turbo-racists who’d invited literal neo-Nazis to Effective Altruism conferences. The pro-racism faction won.

I was mildly pleasantly surprised to see a solid half pushing back in the comments responding to the first Manifest, but it looks like the anti-racism faction didn't get any traction to change anything, and the second Manifest conference was just as bad or worse.

[-] scruiser@awful.systems 9 points 1 month ago

One of the comments really annoyed me:

> The “genetics is meaningless at the individual level” argument has always struck me as a bit of an ivory-tower oversimplification.

No, it's pushing back at eugenicists with completely fallacious ideas. See, for example, Genesmith's posts on LessWrong. They are like concentrated genetics Dunning-Kruger, and the lesswrongers eat them up.

> No one is promising perfect prediction.

Yes they are; see Kelsey Piper's comments about superbabies, or Eliezer's worldbuilding about dath Ilan's eugenics, or Genesmith's totally wacko ideas.

[-] scruiser@awful.systems 9 points 5 months ago

You betcha it is. Lab leak conspiracy mongering (with added fear over gain-of-function research analogized to AGI research) is a popular "viewpoint" on LessWrong, aided, as is typical, by misapplication of Bayes' theorem and Dunning-Kruger misreadings of the "evidence".
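For anyone wondering what "misapplication of Bayes' theorem" looks like in practice: the formula itself is trivial, the problem is that the likelihoods get pulled out of thin air. A toy sketch (my own made-up numbers, not anyone's actual argument):

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis H."""
    joint_h = prior * p_evidence_given_h
    joint_not_h = (1 - prior) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Same "evidence", two sets of eyeballed likelihoods, opposite conclusions.
print(posterior(0.5, 0.9, 0.2))  # ~0.82 -- "the evidence strongly favors my pet theory!"
print(posterior(0.5, 0.3, 0.6))  # ~0.33 -- same formula, different vibes
```

Dress the vibes up as numbers, run them through the formula, and the conclusion looks "rigorous".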

[-] scruiser@awful.systems 8 points 5 months ago

I guess anti-communist fears and libertarian bias outweigh their fetishization of East Asians when it comes to the CCP?

I haven't seen any articles on the EA forums about spreading to China... China does have billionaires and philanthropists, but, judging by Jack Ma's example, when they start talking big about altering society (in ways that just so happen to benefit the billionaires), they get to take a vacation from the public eye for a few months... so that might get in the way of EA billionaire activism?

[-] scruiser@awful.systems 10 points 5 months ago

Yep. They've already used doomerism to drive LLM hype, and this fearmongering about China is just an extension of that. Worse yet, it is something both the doomers and the accelerationists can (mostly) agree on (although the doomers are always quick to emphasize that the real threat is the AGI), and it is a lot more legible to existing war hawk "thinking".

[-] scruiser@awful.systems 10 points 5 months ago* (last edited 5 months ago)

He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.

His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far

My own scoring:

> The first prompt programming libraries start to develop, along with the first bureaucracies.

I don't think any sane programmer or scientist would credit the current "prompt engineering" "skill set" as comparable to programming libraries, and AI agents still aren't what he was predicting for 2022.

> Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.

There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.

> Revenue is high enough to recoup training costs within a year or so.

Hahahaha, no... they are still losing money per customer, never mind recouping training costs.

> Instead, the AIs just make dumb mistakes, and occasionally “pursue unaligned goals” but in an obvious and straightforward way that quickly and easily gets corrected once people notice

The safety researchers have made this one "true" by teeing up prompts specifically to get the AI to do stuff that sounds scary to people that don't read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.

> The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.

> They also try to contrive scenarios

Emphasis on the word "contrive".

> The age of the AI assistant has finally dawned.

So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful (if narrow in use case) apps by 2022-2024, so we are already off target for this prediction.

I can see how they are trying to anoint him as a prophet, but I don't think anyone not already drinking the Kool-Aid will buy it.

[-] scruiser@awful.systems 9 points 5 months ago* (last edited 5 months ago)

Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?

Committing to a hard timeline at least means that making fun of them and explaining to laymen how stupid they are will be a lot easier in two years. I doubt the complete failure of this timeline will actually shake the true believers, though. And the more experienced ~~grifters~~ forecasters know to keep things vaguer so they will be able to retroactively reinterpret their predictions as correct.

[-] scruiser@awful.systems 9 points 7 months ago

My understanding is that it is possible to reliably (given the reliability required for lab animals) insert genes for individual proteins. I.e., if you want a transgenic mouse line that has neurons that will fluoresce under laser light when they are firing, you can insert a gene sequence for GCaMP without too much hassle. You can even get the inserted gene to be under the control of certain promoters so that it will only activate in certain types of neurons and not others. Some really ambitious work has inserted multiple sequences for different colors of optogenetic indicators into a single mouse line.

If you want something more complicated that isn't just a sequence for a single protein (or at most a few proteins), never mind something nebulous on the conceptual level like "intelligence", then yeah, the technology or even the basic scientific understanding is lacking.

Also, the gene insertion techniques that are reliable enough for experimenting on mice and rats aren't nearly reliable enough to use on humans (not that they even know what genes to insert in the first place for anything but the most straightforward of genetic disorders).

[-] scruiser@awful.systems 9 points 1 year ago

I’m almost certain I’ve seen EY catch shit on twitter (from actual ml researchers no less) for insinuating something very similar.

A sneer classic: https://www.reddit.com/r/SneerClub/comments/131rfg0/ey_gets_sneered_on_by_one_of_the_writers_of_the/

[-] scruiser@awful.systems 9 points 1 year ago

Well, if they were really "generalizing" just from training on crap tons of written text, they could implicitly develop a model of the letters in each token based on the examples of spelling, wordplay, acronyms, and acrostic poetry on the internet. The AI hype men would like you to think they are generalizing just off the size of their datasets, the length of training, and the size of the models. But they aren't really "generalizing" that much (and even the examples of them apparently doing any generalizing are kind of arguable), so they can't work around this weakness.
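For anyone who hasn't poked at tokenization: the model never sees letters at all, only opaque token IDs, so anything it "knows" about spelling has to be picked up indirectly from text about spelling. A quick sketch using the tiktoken library (assuming it's installed; the exact split depends on the encoding):

```python
# Why letter-level tasks are awkward for LLMs: the model operates on token IDs, not characters.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]

print(token_ids)  # a handful of integer IDs, not ten letters
print(pieces)     # sub-word chunks (something like ['str', 'awberry'], depending on the encoding)

# Counting the r's is trivial in code, but from the model's side the letters are
# hidden inside opaque IDs it only "knows" about via spelling examples in the training data.
print(word.count("r"))  # 3
```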

The counting failure in general is even clearer and lacks the excuse of unfavorable tokenization. The AI hype men would have you believe that just an incremental improvement in multi-modality or scaffolding will overcome this, but I think they need to make more fundamental improvements to the entire architecture they are using.

[-] scruiser@awful.systems 8 points 1 year ago

> Yudkowsky’s original rule-set

Yeah, the original no-politics rule on LessWrong baked libertarian assumptions into the discourse (because no-politics means the default political assumptions of the major writers and audience are free to take over). From there it was just a matter of time until it ended up somewhere right wing.

> “object level” vs “meta level” dichotomy

I hadn't linked the tendency to go meta to the cultishness or the no-politics rule before, but I can see the connection now that you point it out. As you say, it prevents simply naming names and using direct quotes, which is a pretty good tactic for countering racists.

> could not but have been the eventual outcome of the same rule-set

I'm not sure that rule-set made HBD hegemony inevitable; there were a lot of other factors that helped along the way! The IQ fetishism made it ripe for HBDers. The edgy speculative futurism is also fertile ground for HBD infestation. And the initial audience and writings having a libertarian bent made the no-politics rule favor right-wing ideology; an initial audience and body of writing with a strong left-wing bent might have gone in a different direction (not that a tankie internal movement would be good, but at least I don't know tankies to be HBD proponents).

> just to be normal

Yeah, it seems really rare for a commenter to simply say that racism is bad and you shouldn't invite racists to your events. Even the ones that seem to disagree with racism impulsively engage in hand-wringing, apologize for being offended, and carefully moderate their condemnation of racism and racists.

[-] scruiser@awful.systems 9 points 2 years ago

The thing that gets me the most about this is that they can't imagine that Eliezer might genuinely be in favor of inclusive language, and thus his use of people's preferred pronouns must be a deliberate, calculated political correctness move, and thus a violation of the norms espoused by the sequences (which the author takes as a given that Eliezer has never broken before, and thus violating his own sequences is some sort of massive and unique problem).

To save you all having to read the rant...

> —which would have been the end of the story, except that, as I explained in a subsequent–subsequent post, "A Hill of Validity in Defense of Meaning", in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that suggested that people were philosophically confused if they disputed that men could be women in some unspecified metaphysical sense.

Also, bonus sneer points for developing weird terminology for everything, like referring to Eliezer and Scott as the Caliphs of rationality.

> Caliphate officials (Eliezer, Scott, Anna) and loyalists (Steven) were patronizingly consoling me

One of the top replies does call it like it is...

> A meaningful meta-level reply, such as "dude, relax, and get some psychological help" will probably get me classified as an enemy, and will be interpreted as further evidence about how sick and corrupt is the mainstream-rationalist society.
