😅 honestly I don't know what else to say, the memory haunts me to this day. I think it was the point when I went from "huh, the rats make weirdly dumb mistakes considering they've made posts about exactly these kinds of errors" to "wait, there's something really sinister going on here"

[-] TerribleMachines@awful.systems 10 points 1 year ago

Truer words were never spoken, probably.

CFAR is the mind killer (because they kill you and replace you with a Yud clone).

[-] TerribleMachines@awful.systems 8 points 1 year ago* (last edited 1 year ago)

Only half joking: there was this one fanfic you see...

Mainly I don't think there was any one inciting incident beyond its creation: Yud was a one-man cult way before LW, and the sequences actively pushed all the cultish elements required to lose touch with reality. (Fortunately, my dyslexic ass only got as far as the earlier bits he mostly stole from other people, rather than the really crazy stuff.)

There was definitely a step-change around the time CFAR was created, which was basically a recruitment mechanism for the cult and part of the reason I got anywhere physically near those rubes myself. An organisation made to help people be more rational seemed like a great idea—except it literally became EY/MIRI's personal sockpuppet. They would get people together in these fancy-ass mansions for their workshops and then tell them nothing other than AI research mattered. I think it was 2014/15 when they decided internally that CFAR's mission was to create more people like Yudkowsky. I don't think it's a coincidence that most of the really crazy cult stuff I've heard about happened after then.

Not that bad stuff didn't happen before either. ^___^

Good point with the line! Some of the best liars are good at pretending to themselves they believe something.

I don't think it's widely known, but it is known (there are old sneerclub posts about it somewhere), that he used to feed the people he was dating LSD and try to convince them they "depended" on him.

First time I met him, in a professional setting, he had his (at the time) wife kneeling at his feet wearing a collar.

Do I have hard proof he's a criminal? Probably not, at least not without digging. Do I think he is? Almost certainly.

[-] TerribleMachines@awful.systems 7 points 1 year ago* (last edited 1 year ago)

Yeah, this ~~post~~ (edit: "comment", the original post does not spark joy) sparked joy for me too (my personal cult lingo is from Marie Kondo books, whatcha gonna do)

One of my takes is that the "AI alignment" garbage is way less of a problem than "Human Alignment", i.e. how to get humans to work together and stop being jerks all the time. Absolutely wild that they can't see that, except perhaps when it comes to trying to get other humans to give them money for the AIpocalypse.

Preach, as someone inside academia, the bullcrap is real. I very rarely read a paper that hasn't got a major stats issue—an academic paper is only worth something if you understand it enough to know how wrong it is, or if there's plenty of replication/related work building on it, ideally both. (And mine is a technical field with an objective measure of truth, but don't let my colleagues in the humanities hear me say that—it's not that their work is worthless, it's just that it's not reliable.)

It's true, I'm terrible for it myself 😅

As you were being pedantic, allow me to be pedantic in return.

Admittedly, you might know something I don't, but I would describe Andrew Ng as an academic. These kinds of industry partnerships, like the one in that article you referred to, are really, really common in academia. In fact, it's how a lot of our research gets done. We can't do research if we don't have funding, and so a big part of being an academic is persuading companies to work with you.

Sometimes companies really, really want to work with you, and sometimes you've got to provide them with a decent value proposition. This isn't just AI research either, but very common in statistics, as well as biological sciences, physics, chemistry, well, you get the idea. Not quite the same situation in humanities, but eh, I'm in STEM.

Now, in terms of universities having the hardware, certainly these days there is no way a university will have even close to the same compute power that a large company like Google has access to. Though "even back in" 2012 (and well before), universities had supercomputers. It was pretty common to have a resident supercomputer that you'd use. For me (my background's originally in physics), back then we had a supercomputer in our department, the only one at the university, and people from other departments would occasionally ask to run stuff on it. A simpler time.

It's less that universities don't have access to that compute power; it's more that they just don't run server farms. So we pay Google or Amazon and so on for it, like everyone in the corporate world—except of course the companies that run those servers themselves (and even they still have to cover costs and forgone revenue). Sometimes that's subsidized by working with a big tech company, but it isn't always.

I'm not even going to get into the history of AI/ML algorithms and the role of academic contributions there, and I don't claim that the industry played no role; but the narrative that all these advancements are corporate just ain't true, compute power or no. We just don't shout so loud or build as many "products."

Yeah, you're absolutely right that MIRI didn't try any meaningful computational experiments that I've seen. As far as I can tell, their research record is... well, staring at ceilings and thinking up vacuous problems. I actually once (back when I flirted with the cult) went to a seminar that the big Yud himself delivered; he spent the whole time talking about qualia, and when someone asked him if he could describe a research project he was actively working on, he refused, on the basis that it was "too important to share."

"Too important to share"! I've honestly never met an academic who doesn't want to talk about their work. Big Yud is a big let down.

Love this!

Alas, if Yud took an actual physics class, he wouldn't be able to use it as the poorly defined magic system for his OC doughnut-steal IRL Bayesian superintelligence fanfic.

[-] TerribleMachines@awful.systems 18 points 1 year ago

> My worry in 2021 was simply that the TESCREAL bundle of ideologies itself contains all the ingredients needed to "justify," in the eyes of true believers, extreme measures to "protect" and "preserve" what Bostrom's colleague, Toby Ord, describes as our "vast and glorious" future among the heavens.

Golly gee, those sure are all the ingredients for white supremacy these folks are playing around with. Good job there are no signs of racism... right, right?!?!

In other news, I find it wild that big Yud has gone on an arc from "I will build an AI to save everyone" to "let's do a domestic terrorism against AI researchers." He should be careful, someone might think this is displaced rage at his own failure to make any kind of intellectual progress while academic AI researchers have passed him by.

(Idk if anyone remembers how salty he was when AlphaGo showed up and crapped all over his "symbolic AI is the only way" mantra, but it's pretty funny to me that the very group of people he used to say were incompetent are a "threat" to him now that they're successful. Schoolyard bully stuff and whatnot.)

[-] TerribleMachines@awful.systems 14 points 1 year ago

Having lurked for a long time, sneerclub is aimed at people who already have a good idea of the horror of TESCREAL groups—the point isn't to attract new members, but catharsis for those of us that have had to deal with the TechBros/Fascists etc.

and for sneering, the sneering is important.

Getting real for a moment: for me, I used to be in deep with these people, and then my friends in the community committed suicide due to the rampant sexual abuse, and I got the hell out. Sneerclub was the only place the reports of assault were taken seriously, while the TESCREALs all closed ranks.

It's all a long way back for me now, but I love this place. That there is a tiny part of the Internet out there that calls these people on their shit and sneers gives me so much peace.

(For sneerclubbers reading this; thanks folks, you're the best! ✨️)

[-] TerribleMachines@awful.systems 12 points 1 year ago

At the risk of being NSFW.

When I met Yud some years ago, I asked him how he goes about learning new things; his answer was roughly: "Scroll on Facebook until I find someone who has written about it." Maybe he actually read some of the sources he references a long time ago, but I think he gave up on learning new things and now sits comfortably abusing his power over the community.

Egads these people are gross.

