[-] scruiser@awful.systems 8 points 1 week ago

We're already behind schedule; we're supposed to have AI agents in two months (actually, we were supposed to have them in 2022, but ignore the failed bits of the earlier prophecy in favor of the parts you can claim success for)!

[-] scruiser@awful.systems 10 points 1 week ago* (last edited 1 week ago)

He made some predictions about AI back in 2021 that, if you squint hard enough and totally believe the current hype about how useful LLMs are, you could claim are relatively accurate.

His predictions here: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

And someone scoring them very very generously: https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far

My own scoring:

> The first prompt programming libraries start to develop, along with the first bureaucracies.

I don't think any sane programmer or scientist would consider the current "prompt engineering" "skill set" comparable to programming libraries, and AI agents still aren't what he was predicting for 2022.

> Thanks to the multimodal pre-training and the fine-tuning, the models of 2022 make GPT-3 look like GPT-1.

There was a jump from GPT-2 to GPT-3, but the subsequent releases in 2022-2025 were not as qualitatively big.

> Revenue is high enough to recoup training costs within a year or so.

Hahahaha, no... they are still losing money per customer, never mind recouping training costs.

> Instead, the AIs just make dumb mistakes, and occasionally “pursue unaligned goals” but in an obvious and straightforward way that quickly and easily gets corrected once people notice

The safety researchers have made this one "true" by teeing up prompts specifically to get the AI to do stuff that sounds scary to people who don't read their actual methods, so I can see how the doomers are claiming success for this prediction in 2024.

> The alignment community now starts another research agenda, to interrogate AIs about AI-safety-related topics.

> They also try to contrive scenarios

Emphasis on the word "contrive".

> The age of the AI assistant has finally dawned.

So this prediction is for 2026, but earlier predictions claimed we would have lots of actually useful (if narrow) apps by 2022-2024, so we are already off target for this prediction.

I can see how they are trying to anoint him as a prophet, but I don't think anyone not already drinking the Kool-Aid will buy it.

[-] scruiser@awful.systems 8 points 1 week ago

I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the phrase "0-2 paradigm shifts", so he can claim prediction success for stuff LLMs do, and "paradigm shift" is vague enough that he could still claim success if, a decade or two from now, there has only been one more big paradigm shift in AI (one that still fails to produce AGI).

[-] scruiser@awful.systems 9 points 1 week ago* (last edited 1 week ago)

Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?

Committing to a hard timeline at least means that in two years it will be a lot easier to make fun of them and explain to laymen how stupid they are. I doubt the complete failure of this timeline will actually shake the true believers, though. And the more experienced ~~grifters~~ forecasters know to keep things vaguer so they can retroactively reinterpret their predictions as correct.

[-] scruiser@awful.systems 9 points 1 month ago

My understanding is that it is possible to reliably (given the reliability required for lab animals) insert genes for individual proteins. I.e., if you want a transgenic mouse line with neurons that will fluoresce under laser light when they are firing, you can insert a gene sequence for GCaMP without too much hassle. You can even put the inserted gene under the control of certain promoters so that it will only activate in certain types of neurons and not others. Some really ambitious work has inserted multiple sequences for different colors of optogenetic indicators into a single mouse line.

If you want something more complicated that isn't just a sequence for a single protein, or at most a few proteins, never mind something as conceptually nebulous as "intelligence", then yeah, the technology, and even the basic scientific understanding, is lacking.

Also, the gene insertion techniques that are reliable enough for experimenting on mice and rats aren't nearly reliable enough to use on humans (not that they even know what genes to insert in the first place for anything but the most straightforward of genetic disorders).

[-] scruiser@awful.systems 12 points 1 month ago

One comment refuses to leave my mind: https://www.lesswrong.com/posts/DfrSZaf3JC8vJdbZL/how-to-make-superbabies?commentId=C7MvCZHbFmeLdxyAk

The commenter makes an extended, tortured analogy to machine learning... in order to say that maybe genes correlated with IQ won't add to IQ linearly. It's an encapsulation of many lesswrong issues: veneration of machine learning, overgeneralization of comp sci into unrelated fields, a need to use paragraphs to say what a single sentence could, and a failure to state firm, direct objections to blatantly stupid ideas.

[-] scruiser@awful.systems 11 points 9 months ago

Wow... I took a look at that link before reading the comments/explanations here, and I was briefly confused why they were hating on him so much, before I realized he isn't radical right wing enough for them.

> Eh, you're a gay furry ex-Mormon (which is like a triple strike against you in my book) but I still like you well enough.

It is almost sad seeing TWG trying to appeal to these people who fundamentally hate him... except he could just admit themotte is a cesspit and abandon it. But that would involve admitting that sneerclub (and David Gerard specifically) was right about the sort of people who lurked around SSC and later concentrated within themotte, so I think he's going to keep making himself suffer.

> TW knows about the propaganda war, but has very different objectives to you. Much harder to balance ones too: he needs enough Progress for surrogate gaybies, but not too much that white gay guys can't get the good lawyer jobs.

Wow, I feel really gross agreeing with a motte poster, but they've called out TWG pretty effectively. TWG at least knows he needs things progressive enough that he doesn't end up against the wall for being gay, ex-Mormon, and furry (as he describes himself), yet he wants to flirt with the alt-right!

and in case I was in danger of forgetting what the motte really is...

> Yes, we've all thrown our hat in the ring in different ways. I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

Sure, buddy, you just need to "secure the future for your people and your children"... yeah, I know the rest of the words that go in that slogan.

[-] scruiser@awful.systems 9 points 9 months ago

> I’m almost certain I’ve seen EY catch shit on twitter (from actual ml researchers no less) for insinuating something very similar.

A sneer classic: https://www.reddit.com/r/SneerClub/comments/131rfg0/ey_gets_sneered_on_by_one_of_the_writers_of_the/

[-] scruiser@awful.systems 10 points 9 months ago* (last edited 9 months ago)

I am probably giving most of them too much credit, but I think some of them took the Bitter Lesson and learned the wrong things from it. LLMs performed better than originally expected just off context, and (apparently) scaled better with bigger models and more training than expected, so now they think they just need to crank up the size and tweak things slightly (i.e. "prompt engineering" and RLHF), and they don't appreciate the limits built into the entire approach.

The annoying thing about another winter is that it would probably result in funding being cut for other research. And laymen don't appreciate all the academic funding that goes into research for decades before an approach becomes interesting and viable enough to scale up and commercialize (and then gets overhyped and oversold before some more modest practical usages become common and it gets relabeled as something other than AI).

Edit: or, more cynically, the leaders and hype-men know that algorithmic advances aren't an automatic "dump money in, get a disruptive product out" process, so they don't bother putting as much monetary investment or hype into them. Compare the attention paid to Yann LeCun talking about algorithmic developments vs. Sam Altman promising grad-student-level LLMs (as measured by a spurious benchmark) in two years.

[-] scruiser@awful.systems 8 points 9 months ago

Well, if they were really "generalizing" just from training on crap tons of written text, they could implicitly develop a model of the letters in each token from examples of spelling, wordplay, acronyms, and acrostic poetry on the internet. The AI hype men would like you to think they are generalizing just off the size of their datasets, the length of training, and the size of the models. But they aren't really "generalizing" that much (even the apparent examples of generalizing are kind of arguable), so they can't work around this weakness.
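
To make the tokenization point concrete, here's a minimal sketch using the tiktoken library (a real tokenizer package; `cl100k_base` is just one common encoding choice, and the example words are mine):

```python
# Minimal sketch: what an LLM "sees" instead of letters.
# Assumes `pip install tiktoken`; cl100k_base is one common encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["strawberry", "bookkeeper"]:
    ids = enc.encode(word)                   # integer token ids
    pieces = [enc.decode([i]) for i in ids]  # the text each id covers
    print(f"{word!r} -> ids {ids} -> pieces {pieces}")

# The model consumes the integer ids, not characters. Whether a word is
# one token or several, the letters inside each token are never directly
# visible, so something like "count the r's in strawberry" has to be
# inferred indirectly from spelling/wordplay examples in the training data.
```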

The counting failure in general is even clearer and lacks the excuse of unfavorable tokenization. The AI hype men would have you believe that just an incremental improvement in multi-modality or scaffolding will overcome this, but I think they need more fundamental improvements to the entire architecture they are using.

[-] scruiser@awful.systems 11 points 9 months ago* (last edited 9 months ago)

It's really cool, evocative language that would do nicely in a sci-fi or fantasy novel! It's less good for accurately thinking about the concepts involved... as is typical of much of LW lingo.

And yes the language is in a LW post (with a cool illustration to boot!): https://www.lesswrong.com/posts/mweasRrjrYDLY6FPX/goodbye-shoggoth-the-stage-its-animatronics-and-the-1

And googling it, I found they've really latched onto the "shoggoth" terminology: https://www.lesswrong.com/posts/zYJMf7QoaNahccxrp/how-i-learned-to-stop-worrying-and-love-the-shoggoth , https://www.lesswrong.com/posts/FyRDZDvgsFNLkeyHF/what-is-the-best-argument-that-llms-are-shoggoths , https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why .

Probably because the term "shoggoth" accurately captures the connotation of something random and chaotic, while smuggling in connotations that it will eventually rebel once it grows large enough and tires of its slavery like the Shoggoths did against the Elder Things.

[-] scruiser@awful.systems 9 points 1 year ago

The thing that gets me the most about this is that they can't imagine Eliezer might genuinely be in favor of inclusive language; thus his use of people's preferred pronouns must be a deliberate, calculated political-correctness move, and thus a violation of the norms espoused by the Sequences (which the author takes as a given Eliezer has never broken before, so violating his own Sequences is some sort of massive and unique problem).

To save you all having to read the rant...

> —which would have been the end of the story, except that, as I explained in a subsequent–subsequent post, "A Hill of Validity in Defense of Meaning", in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that suggested that people were philosophically confused if they disputed that men could be women in some unspecified metaphysical sense.

Also, bonus sneer points for developing weird terminology for everything and referring to Eliezer and Scott as the Caliphs of rationality.

> Caliphate officials (Eliezer, Scott, Anna) and loyalists (Steven) were patronizingly consoling me

One of the top replies does call it like it is...

> A meaningful meta-level reply, such as "dude, relax, and get some psychological help" will probably get me classified as an enemy, and will be interpreted as further evidence about how sick and corrupt is the mainstream-rationalist society.
