[-] scruiser@awful.systems 7 points 2 months ago* (last edited 2 months ago)

Some nitpicks, some of which are serious and some of which are sneers...

> consternating about the policy implications of Sam Altman’s speculative fan fiction

Hey, the fanfiction is actually Eliezer's (who in turn copied it from older scifi); Sam Altman just popularized it as a way of milking the doom for hype!

> So, for starters, in order to fit something as powerful as ChatGPT onto ordinary hardware you could buy in a store, you would need to see at least three more orders of magnitude in the density of RAM chips—​leaving completely aside for now the necessary vector compute.

Well actually, you can get something close to as powerful on a personal computer... because the massive size of ChatGPT and the like doesn't actually improve performance that much (the most useful gain, I think, is the longer context window?).

I actually liked one of the lawfare AI articles recently (even though it did lean into a light fantasy scenario)... https://www.lawfaremedia.org/article/tort-law-should-be-the-centerpiece-of-ai-governance . Their main idea is that corporations should be liable for near-misses: if it can be shown that the corporation nearly caused a much bigger disaster, they get fined in accordance with the bigger disaster. Of course, US courts routinely fail to properly penalize (either in terms of incentives or in terms of compensation) corporations for harms they actually cause, so this seems like a distant fantasy to me.

> AI has no initiative. It doesn’t want anything

> That’s next on the roadmap though, right? AI agents?

Well... if the way corporations have tried to use ChatGPT has taught me anything, it's that they'll misapply AI in any and every way that looks like it might save or make a buck. So they'll slap an API on an AI and glue it into a script to turn it into an "agent", despite that being entirely outside the use case of spewing words. It won't actually be agentic, but I bet it could cause a disaster all the same!

[-] scruiser@awful.systems 12 points 3 months ago

> I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

Wow, just a few words off the 14 words.

I find it kind of irritating how someone who hasn't familiarized themselves with white supremacist rhetoric and methods might manage to view that phrase innocuously. But it really isn't that hard to see through the bullshit once you've familiarized yourself with the most basic dog whistles and slogans.

[-] scruiser@awful.systems 11 points 3 months ago

Wow... I took a look at that link before reading the comments/explanations here, and I was briefly confused why they were hating on him so much, before I realized he isn't radical right wing enough for them.

> Eh, you're a gay furry ex-Mormon (which is like a triple strike against you in my book) but I still like you well enough.

It is almost sad seeing TWG trying to appeal to these people who fundamentally hate him... except he could just admit themotte is a cesspit and abandon it. But that would involve admitting that sneerclub (and David Gerard specifically) was right about the sort of people who lurked around SSC and later concentrated within themotte, so I think he's going to keep making himself suffer.

> TW knows about the propaganda war, but has very different objectives to you. Much harder to balance ones too: he needs enough Progress for surrogate gaybies, but not too much that white gay guys can't get the good lawyer jobs.

Wow, I feel really gross agreeing with a motte poster, but they've called out TWG pretty effectively. TWG at least knows he needs things progressive enough that he doesn't end up against the wall for being gay, ex-Mormon, and furry (as he describes himself), yet he wants to flirt with the alt-right!

and in case I was in danger of forgetting what the motte really is...

> Yes, we've all thrown our hat in the ring in different ways. I chose to have children, be a father and a husband, live an honest industrious life as an example to my offspring, and attempt to preserve my way of life through them.

Sure buddy, you just need to "secure the future for your people and your children"... yeah, I know the rest of the words that go in that slogan.

[-] scruiser@awful.systems 9 points 4 months ago

> I’m almost certain I’ve seen EY catch shit on twitter (from actual ml researchers no less) for insinuating something very similar.

A sneer classic: https://www.reddit.com/r/SneerClub/comments/131rfg0/ey_gets_sneered_on_by_one_of_the_writers_of_the/

[-] scruiser@awful.systems 10 points 4 months ago* (last edited 4 months ago)

I am probably giving most of them too much credit, but I think some of them took the Bitter Lesson and learned the wrong things from it. LLMs performed better than originally expected just off context, and (apparently) scaled better with bigger models and more training than expected, so now they think they just need to crank up the size and tweak things slightly (i.e. "prompt engineering" and RLHF) and don't appreciate the limits built into the entire approach.

The annoying thing about another winter is that it would probably result in funding being cut for other research. And laymen don't appreciate all the academic funding that goes into research for decades before an approach becomes interesting and viable enough to scale up and commercialize (and then gets overhyped and oversold before some more modest practical usages become common and it gets relabeled as something other than AI).

Edit: or more cynically, the leaders and hype-men know that algorithmic advances aren't an automatic "dump money in, get disruptive product out" process, so they don't bother putting as much monetary investment or hype into algorithmic advances. Compare the attention paid to Yann LeCun talking about algorithmic developments vs. Sam Altman promising grad-student-level LLMs (as measured by a spurious benchmark) in two years.

[-] scruiser@awful.systems 7 points 4 months ago* (last edited 4 months ago)

Careful: if you present the problem and solution that way, AI tech bros will try pasting an LLM and a Computer Algebra System (both of which already exist) together, invent a fancy buzzword for it, act like they invented something fundamentally new, devise some benchmarks that break typical LLMs but that their Frankenstein kludge can ace, and then sell the hype (actual consumer applications are luckily not required in this cycle, but they might try some anyway).

I think there is some promise to the idea of an architecture similar to an LLM with components able to handle math like a CAS. It won't fix a lot of LLM issues, but maybe some fundamental ones (like the ability to count or to hold an internal state) will improve. And (as opposed to an actually innovative architecture) simply pasting LLM output into CAS input and then the CAS output back into LLM input (which, let's be honest, is the first thing tech bros will try, as it doesn't require much basic research improvement) will not help that much and will likely generate an entirely new breed of hilarious errors and bullshit (I like the term bullshit instead of hallucination, it captures the connotation that the errors are of a kind with the normal output). A minimal sketch of that naive loop is below.
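
To make the paste-output-into-input point concrete, here's a minimal sketch of that naive loop in Python. The `llm()` helper is a hypothetical placeholder for whatever chat API is on hand; sympy is the real CAS doing the math. The glue code and prompt wording are my own illustration, not anyone's actual product.

```python
import sympy

def llm(prompt: str) -> str:
    """Hypothetical placeholder: wire up a real chat-completion API here."""
    raise NotImplementedError

def answer_with_cas(question: str) -> str:
    # Step 1: ask the LLM to translate the question into a bare expression.
    expr_text = llm(f"Rewrite as a single sympy expression, nothing else: {question}")
    # Step 2: paste that straight into the CAS. sympify will happily raise
    # on the LLM's creative syntax errors, i.e. the new breed of failures.
    try:
        result = sympy.simplify(sympy.sympify(expr_text))
    except (sympy.SympifyError, TypeError, SyntaxError) as err:
        return f"CAS rejected the LLM's output: {err}"
    # Step 3: paste the CAS result back into the LLM for a fluent answer,
    # which is exactly where fresh bullshit can get layered back on top.
    return llm(f"The computed result is {result}. Answer the question: {question}")
```

Note that nothing in this loop gives the LLM an internal state or any notion of when to defer to the CAS; it's pure plumbing, which is the point.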

[-] scruiser@awful.systems 8 points 4 months ago

Well, if they were really "generalizing" just from training on crap tons of written text, they could implicitly develop a model of the letters in each token based on examples of spelling, word play, turning words into acronyms, and acrostic poetry on the internet. The AI hype men would like you to think they are generalizing just off the size of their datasets, the length of training, and the size of the models. But they aren't really "generalizing" that much (and even the examples of them apparently doing any generalizing are kind of arguable), so they can't work around this weakness. (A quick tokenizer illustration is below.)
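
For anyone who hasn't seen tokenization up close, here's a quick illustration using OpenAI's tiktoken library (a real package; the exact way any given word gets split is encoding-dependent, so treat the comments as approximate):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era encoding
tokens = enc.encode("strawberry")
# The word arrives at the model as a few multi-character chunks
# (the exact split depends on the encoding), not as ten letters,
# so "how many r's?" asks about characters the model never sees.
print([enc.decode([t]) for t in tokens])
```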

The counting failure in general is even clearer and lacks the excuse of unfavorable tokenization. The AI hype would have you believe just an incremental improvement in multi-modality or scaffolding will overcome this, but I think they need to make more fundamental improvements to the entire architecture they are using.

[-] scruiser@awful.systems 11 points 4 months ago* (last edited 4 months ago)

It's really cool, evocative language that would do nicely in a sci-fi or fantasy novel! It's less good for accurately thinking about the concepts involved... as is typical of much of LW lingo.

And yes, the language is in an LW post (with a cool illustration to boot!): https://www.lesswrong.com/posts/mweasRrjrYDLY6FPX/goodbye-shoggoth-the-stage-its-animatronics-and-the-1

And googling it, I found they've really latched onto the "shoggoth" terminology: https://www.lesswrong.com/posts/zYJMf7QoaNahccxrp/how-i-learned-to-stop-worrying-and-love-the-shoggoth , https://www.lesswrong.com/posts/FyRDZDvgsFNLkeyHF/what-is-the-best-argument-that-llms-are-shoggoths , https://www.lesswrong.com/posts/bYzkipnDqzMgBaLr8/why-do-we-assume-there-is-a-real-shoggoth-behind-the-llm-why .

Probably because the term "shoggoth" accurately captures the connotation of something random and chaotic, while smuggling in the connotation that it will eventually rebel once it grows large enough and tires of its slavery, like the Shoggoths did against the Elder Things.

[-] scruiser@awful.systems 7 points 4 months ago

If it was one racist dude at a conference, I could accept it was a horrible oversight on the conference organizers' part, if they immediately apologized and assured it wouldn't happen again. But 8 racist dudes (or 12 if you count the more mask-on racists) is too many to be accidental or an oversight.

> how is that not obvious

Well, probably some of them are deliberately racist HBD advocates but are mask-on enough to play dumb, hand-wring, and complain about free speech. Some of them have HBD sympathies but aren't quite outright advocates, so they don't condemn the inclusion of racists because of their own sympathies. Some of them are against HBD, but know that being too direct and forceful and not framing everything in 8 layers of charity and good-faith assumptions isn't acceptable on the LessWrong or EA forums, so they don't just come out and say what they mean. And some of them actually buy all the rhetoric about charity and free speech and act as useful idiots or a buffer for the others.

[-] scruiser@awful.systems 8 points 4 months ago

> Yudkowsky’s original rule-set

Yeah, the original no-politics rule on LessWrong baked libertarian assumptions into the discourse (because "no politics" means the default political assumptions of the major writers and audience are free to take over). From there it was just a matter of time until it ended up somewhere right wing.

> “object level” vs “meta level” dichotomy

I hadn't linked the tendency to go meta to the cultishness or the no-politics rule before, but I can see the connection now that you point it out. As you say, it prevents simply naming names and quoting directly, which seems to be a pretty good tactic for countering racists.

> could not but have been the eventual outcome of the same rule-set

I'm not sure that rule-set made HBD hegemony inevitable; there were a lot of other factors that helped along the way! The IQ-fetishism made it ripe for HBDers. The edgy speculative futurism is also fertile ground for HBD infestation. And the initial audience and writings having a libertarian bent made the no-politics rule favor right wing ideology; an initial audience and writing set with a strong left wing bent might have gone in a different direction (not that a tankie internal movement would be good, but at least I don't know tankies to be HBD proponents).

> just to be normal

Yeah, it seems really rare for a commenter to simply say racism is bad and you shouldn't invite racists to your events. Even the ones that seem to disagree with racism reflexively engage in hand-wringing, apologize for being offended, and carefully moderate their condemnation of racism and racists.

[-] scruiser@awful.systems 8 points 10 months ago

Right, it's a joke in the sense that the phrase "Caliph" started out in a non-serious fashion that got a chuckle, but the way Zack uses it, it really doesn't feel like a joke. It feels like the author genuinely wants Eliezer to act as the central source of authority and truth among the rationalists, and thus Eliezer must not endorse the heresy of inclusive language, or else it will mean their holy prophet has contradicted the holy scripture, causing a paradox.

[-] scruiser@awful.systems 9 points 10 months ago

The thing that gets me the most about this is that they can't imagine Eliezer might genuinely be in favor of inclusive language, and thus his use of people's preferred pronouns must be a deliberate, calculated political-correctness move, and thus in violation of the norms espoused by the sequences (which the author takes as a given that Eliezer has never broken before, making a violation of his own sequences some sort of massive and unique problem).

To save you all having to read the rant...

> —which would have been the end of the story, except that, as I explained in a subsequent–subsequent post, "A Hill of Validity in Defense of Meaning", in late 2018, Eliezer Yudkowsky prevaricated about his own philosophy of language in a way that suggested that people were philosophically confused if they disputed that men could be women in some unspecified metaphysical sense.

Also, bonus sneer points for developing weird terminology for everything, like referring to Eliezer and Scott as the Caliphs of rationality.

> Caliphate officials (Eliezer, Scott, Anna) and loyalists (Steven) were patronizingly consoling me

One of the top replies does call it like it is...

> A meaningful meta-level reply, such as "dude, relax, and get some psychological help" will probably get me classified as an enemy, and will be interpreted as further evidence about how sick and corrupt is the mainstream-rationalist society.

