
The local Effective Altruism chapter had a stand at the university hobby fair.

The last time I read their charity guide spam email for student clubs, they were still mostly into the relatively benign end of EA stuff, listing some charities they had deemed most effective by some methodology. Curiosity got the better of me and I went to talk to them. I wanted to find out whether they'd started pushing seedier stuff and whether the people at the stand were aware of the dark side of TESCREAL.

They seemed to have gotten into AI risk stuff, which was not surprising. They also seemed unaware of most of the incidents and critics I referred to, mostly knowing only about the FTX debacle.

They invited me to attend their AI risk discussion event, saying (as TREACLES adjacents always do) that they love hearing criticism and different points of view and so on.

On the one hand, EA is not super big here, and most of their members and prospective participants are probably not that invested in the movement yet. This could be an opportunity to spread awareness of the dark side of EA and its adjacent movements, and maybe prevent some people from falling for the cult stuff.

On the other hand, acting as the spokesman for the opposing case is a big responsibility, and the preparation is a lot of work. I'm slightly worried that pushing back at the event might escalate into a public debate or, even worse, some kind of Ben Shapiro-style affair where I'm DESTROYED with FACTS and LOGIC by some guy with a microphone and a primed audience. Also, dealing with these people is usually just plain exhausting.

So, I'm feeling conflicted and would like some advice from the best possible source: random people on the internet. Do y'all think it's a good idea to go? Do you think it's a terrible idea?

top 14 comments
[-] titotal@awful.systems 12 points 1 year ago

My impression is that the toxicity within EA is mainly concentrated in the Bay Area rationalists and in a few of the actual EA organizations. If it's just a local meetup group, it's probably just going to be some regular-ish people with mistaken beliefs who are genuinely concerned about AI.

Just be polite and present arguments, and you might actually change minds, at least among those who haven't been sucked too far into Rationalism.

[-] Soyweiser@awful.systems 8 points 1 year ago* (last edited 1 year ago)

On the other hand, acting as the spokesman for the opposing case is a big responsibility and the preparation is a lot of work.

Not just that, but there's also a risk factor involved, especially as this makes you doxable from here, and there are people who genuinely believe sneerclub is full of the most evil people around. So be a bit careful. I think David Gerard mostly escapes unscathed (apart from people going after his Wikipedia account), but Émile P. Torres (xriskology) has gotten death threats. (I'm just mentioning the things I've heard of; there might be more.) (Timnit Gebru apparently has an online stalker who replies to a lot of what she says and keeps misgendering her with 'they/them' for some weird reason.)

Anyway, more on topic: how EY treated the guy who leaked the emails showing that Scott was way more into NRx than he let on might be a good example of just how bad the EA-sphere is. (I know LW isn't EA, and SSC isn't EA either, but they're related, and pretending they aren't is a bit of a motte/bailey.)

E: do note I lean heavily into being more paranoid, so take that into account in your risk assessment.

[-] dgerard@awful.systems 8 points 1 year ago* (last edited 1 year ago)

I've been hearing about the death threats other crypto critics have gotten and I'm disconcerted I'm not receiving the same. I've posted quite enough information on Twitter to work out precisely where my house is! *

* that it's in E4, that it's almost on the 0 degrees meridian, photos of my back yard. I'm about to move out, so I'm happy to say "I gave you all the clues, Mr Cultist"

[-] froztbyte@awful.systems 3 points 1 year ago

Leave a note in the cupboard or under some floor tiles or something.

[-] kuna@awful.systems 4 points 1 year ago

and keeps misgendering her with ‘they/them’ for some weird reason

I'm not sure if this is the case here, but there is a pattern of weirdos targeting cis black women with transphobia for whatever reason, starting with Michelle Obama.

Mia Mulder made a video about that (among other weird transphobia): https://youtu.be/QH5-MDXzfmg?si=ELZPA2a2dpzNqaRp

[-] swlabr@awful.systems 7 points 1 year ago* (last edited 1 year ago)

As with any IRL situation, it’s probably more pertinent to read the room and be present rather than theorycraft about what might happen.

That being said, my gut says this: there’s going to be a large share of TREACLES people in the crowd. Here’s my argument.

  • The crowd will be mostly EA people.
  • Anyone in EA willing to go to an EA-hosted talk about AI X-risk is probably past the eye-deworming charity phase of EA.

This isn’t the Scientology personality test phase of EA; it’s the private seminar phase right before they teach you about Xenu, except their proselytisers aren’t nearly as charismatic or well trained in conversion.

I think the most viable targets for any detreacling would be any friends or tag-alongs at the event. Sorta like how, if you’re arguing on the internet, you don’t really hope to change the other party’s mind; you’re more hoping to sway anyone who comes along and reads the thread.

I’d go, personally, if only for the spectacle.

[-] dgerard@awful.systems 6 points 1 year ago

I'd concur - you will change zero minds whatever you say. You may introduce seeds of doubt for later. You will get evangelists testing the slogans they've been taught on you in real time.

[-] self@awful.systems 7 points 1 year ago

it’s important to remember what happens when these folks try to debate SneerClub — no minds are changed, they get to jack off to their own slogans and memes, and eventually they get pissed off because we won’t take them seriously

I recommend showing up for a laugh; come as someone satisfying their own curiosity who’s not taking any of this shit too seriously, but don’t engage. if any folks there are receptive to skepticism, they’ll find you (and likewise, if the crowd is too hostile to listen, you’ll find that out very quickly too)

[-] froztbyte@awful.systems 5 points 1 year ago

You don’t control the audience, and you can’t predict what you’ll be asked about or engaged on, so you can’t prepare for the full spectrum of possibilities.

What you can do is decide ahead of time what you feel you can and cannot cover, and what response you’ll use if something in the latter category comes up. You could defer engagement, refer them to other sources, explicitly state that you’re not ready to engage on that subtopic, etc. It’s not as complete a coverage as might be ideal, but you can choose not to step into traps.

And if someone continues down such an avenue despite you saying you choose not to, you just call them on it and shut it down.

Know when you can engage, know when you can walk away.

[-] froztbyte@awful.systems 2 points 1 year ago

@bitofhope did you end up going? how'd it go?

[-] L0rdMathias@sh.itjust.works 2 points 1 year ago

Think about debate night, think about the day after, the week after, and one year from then. Think about what will happen, what could happen, and what should happen at each time.

Do it again, except this time from the perspective that you stayed home and didn't go.

Focus especially hard on your emotions and how these hypotheticals will make you feel.

Choose the future you want to live.

Or, if you aren't good at planning ahead, commit to a coin flip. If you accept the result, you either got the outcome you wanted or truly didn't care; if you hesitate and want to flip again, you've found the answer you don't want and should probably go with the other one.

[-] bitofhope@awful.systems 5 points 1 year ago

That works for general decision making. The reason I'm asking for input is that there might be risks or opportunities involved that I haven't fully considered. There are also people here who have more experience interacting with the AI alarmists' target audience and might be able to comment on their experiences or suggest strategies and talking points.

[-] evasive_chimpanzee@lemmy.world 2 points 1 year ago

I think it would probably be good to go to shed some light on what the movement actually is for some people. At the very surface, the whole point is "how do we do the most good?", which is a fair question to ask. For university students still finding their way in the world, I'd say it's a good thing that they're trying to find the answer. Many of the techy goals of people in that realm seem like cool sci-fi. It's only once you dig deeper that you see the truly sinister nature of the people in the field.

They claim that through technology, they will be able to usher in a utopia where people don't have to work as much. Funny how they don't lobby for laws that would require technological advancements to benefit workers, not the owners. There are many examples throughout history, but one of the best is probably the cotton gin. It was created as a labor-saving device in the hope of reducing or even eliminating slavery, but all it did was make slavery far more profitable. That's what happened even when an inventor was trying to do the right thing. Most tech these days is not developed to benefit everyone.

It's no accident that the people claiming that AGI is a risk to humanity are also the ones trying hardest to get there. They are just a little scared of AGI because it could truly cause societal upheaval, and those at the top of a society have the most to lose in that situation. It's self preservation, not benevolence. The power structures of modern society are vital to their continued lives of extravagance. In the end, they all just want to accumulate wealth, not pay any taxes, and try to make themselves feel like a hero for doing it.

I'd really just say that the people who would be in that room with you probably do have legitimately noble goals, so it's important not to treat them as adversaries. You aren't going to win anyone over if that's how you approach it. Just do some research, and make sure to focus on the impact of the EA people's actions, not their stated goals.

[-] bitofhope@awful.systems 5 points 1 year ago

They claim that through technology, they will be able to usher in a utopia where people don’t have to work as much. Funny how they don’t lobby for laws that would require technological advancements to benefit workers, not the owners.

This is a good point, but I think it's best to be careful with anything they might perceive as too overtly "political". It's one thing to argue why AI doomsday cultism is bad and another to advocate for fully automated luxury communism.

It’s no accident that the people claiming that AGI is a risk to humanity are also the ones trying hardest to get there. They are just a little scared of AGI because it could truly cause societal upheaval, and those at the top of a society have the most to lose in that situation. It’s self preservation, not benevolence. The power structures of modern society are vital to their continued lives of extravagance. In the end, they all just want to accumulate wealth, not pay any taxes, and try to make themselves feel like a hero for doing it.

I might be cynical, but this sounds like overselling AGI and not just because I don't believe we are anywhere close to creating anything I'd consider one.

I'm not looking to have a debate or take an adversarial position. If I am to go, I'll focus on making a case for why AI doom is an unrealistic sci-fi scenario, what actual AI risks we should worry about, why some people benefit from the doomer narrative and possibly touch on why Effective Altruism isn't a wholly benign movement. The point is only to give them the background so they can make their own decisions with healthy skepticism.

I don't assume students interested in rationality and charity work are bad people or anything. Sneering at and berating them right to their faces would be counterproductive.
