[-] blakestacey@awful.systems 17 points 2 weeks ago* (last edited 2 weeks ago)

jhbadger:

As Adam Becker shows in his book, EAs started out being reasonable "give to charity as much as you can, and research which charities do the most good" but have gotten into absurdities like "it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not".

I haven't read Becker's book and probably won't spend the time to do so. But if this is an accurate summary, it's a bad sign for that book, because plenty of EAs were bonkers all along.

As journalists and scholars scramble to account for this ‘new’ version of EA—what happened to the bednets, and why are Effective Altruists (EAs) so obsessed with AI?—they inadvertently repeat an oversimplified and revisionist history of the EA movement. It goes something like this: EA was once lauded as a movement of frugal do-gooders donating all their extra money to buy anti-malarial bednets for the poor in sub-Saharan Africa; but now, a few EAs have taken their utilitarian logic to an extreme level, and focus on ‘longtermism’, the idea that if we wish to do the most good, our efforts ought to focus on making sure the long-term future goes well; this occurred in tandem with a dramatic influx of funding from tech scions of Silicon Valley, redirecting EA into new cause areas like the development of safe artificial intelligence (‘AI-safety’ and ‘AI-alignment’) and biosecurity/pandemic preparedness, couched as part of a broader mission to reduce existential risks (‘x-risks’) and ‘global catastrophic risks’ that threaten humanity’s future.

This view characterizes ‘longtermism’ as a ‘recent outgrowth’ (Ongweso Jr., 2022) or even breakaway ‘sect’ (Aleem, 2022) that does not represent authentic EA (see, e.g., Hossenfelder, 2022; Lenman, 2022; Pinker, 2022; Singer & Wong, 2019). EA’s shift from anti-malarial bednets and deworming pills to AI-safety/x-risk is portrayed as mission-drift, given wings by funding and endorsements from Silicon Valley billionaires like Elon Musk and Sam Bankman-Fried (see, e.g., Bajekal, 2022; Fisher, 2022; Lewis-Kraus, 2022; Matthews, 2022; Visram, 2022). A crucial turning point in this evolution, the story goes, includes EAs encountering the ideas of transhumanist philosopher Nick Bostrom of Oxford University’s Future of Humanity Institute (FHI), whose arguments for reducing x-risks from AI and biotechnology (Bostrom, 2002, 2003, 2013) have come to dominate EA thinking (see, e.g., Naughton, 2022; Zaitchik, 2022).

This version of events gives the impression that EA’s concerns about x-risk, AI, and ‘longtermism’ emerged out of EA’s rigorous approach to evaluating how to do good, and has only recently been embraced by the movement’s leaders. MacAskill’s publicity campaign for WWOTF certainly reinforces this perception. Yet, from the formal inception of EA in 2012 (and earlier) the key figures and intellectual architects of the EA movement were intensely focused on promoting the suite of causes that now fly under the banner of ‘longtermism’, particularly AI-safety, x-risk/global catastrophic risk reduction, and other components of the transhumanist agenda such as human enhancement, mind uploading, space colonization, prediction and forecasting markets, and life extension biotechnologies.

To give just a few examples: Toby Ord, the co-founder of GWWC and CEA, was actively collaborating with Bostrom by 2004 (Bostrom & Ord, 2004),18 and was a researcher at Bostrom’s Future of Humanity Institute (FHI) in 2007 (Future of Humanity Institute, 2007) when he came up with the idea for GWWC; in fact, Bostrom helped create GWWC’s first logo (EffectiveAltruism.org, 2016). Jason Matheny, whom Ord credits with introducing him to global public health metrics as a means for comparing charity effectiveness (Matthews, 2022), was also working to promote Bostrom’s x-risk agenda (Matheny, 2006, 2009), already framing it as the most cost-effective way to save lives through donations in 2006 (User: Gaverick [Jason Gaverick Matheny], 2006). MacAskill approvingly included x-risk as a cause area when discussing his organizations on Felificia and LessWrong (Crouch [MacAskill], 2010, 2012a, 2012b, 2012c, 2012e), and x-risk and transhumanism were part of 80K’s mission from the start (User: LadyMorgana, 2011). Pablo Stafforini, one of the key intellectual architects of EA ‘behind-the-scenes’, initially on Felificia (Stafforini, 2012a, 2012b, 2012c) and later as MacAskill’s research assistant at CEA for Doing Good Better and other projects (see organizational chart in Centre for Effective Altruism, 2017a; see the section entitled “ghostwriting” in Knutsson, 2019), was deeply involved in Bostrom’s transhumanist project in the early 2000s, and founded the Argentine chapter of Bostrom’s World Transhumanist Association in 2003 (Transhumanismo.org, 2003, 2004). Rob Wiblin, who was CEA’s executive director from 2013-2015 prior to moving to his current role at 80K, blogged about Bostrom and Yudkowsky’s x-risk/AI-safety project and other transhumanist themes starting in 2009 (Wiblin, 2009a, 2009b, 2010a, 2010b, 2010c, 2010d, 2012). In 2007, Carl Shulman (one of the most influential thought-leaders of EA, who oversees a $5,000,000 discretionary fund at CEA) articulated an agenda that is virtually identical to EA’s ‘longtermist’ agenda today in a Felificia post (Shulman, 2007). Nick Beckstead, who co-founded and led the first US chapter of GWWC in 2010, was also simultaneously engaging with Bostrom’s x-risk concept (Beckstead, 2010). By 2011, Beckstead’s PhD work was centered on Bostrom’s x-risk project: he entered an extract from the work-in-progress, entitled “Global Priority Setting and Existential Risk: Crucial Ethical Considerations” (Beckstead, 2011b) to FHI’s “Crucial Considerations” writing contest (Future of Humanity Institute, 2011), where it was the winning submission (Future of Humanity Institute, 2012). His final dissertation, entitled On the Overwhelming Importance of Shaping the Far Future (Beckstead, 2013) is now treated as a foundational ‘longtermist’ text by EAs.

Throughout this period, however, EA was presented to the general public as an effort to end global poverty through effective giving, inspired by Peter Singer. Even as Beckstead was busy writing about x-risk and the long-term future in his own work, in the media he presented himself as focused on ending global poverty by donating to charities serving the distant poor (Beckstead & Lee, 2011; Chapman, 2011; MSNBC, 2010). MacAskill, too, presented himself as doggedly committed to ending global poverty....

(Becker's previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)

[-] blakestacey@awful.systems 17 points 3 months ago

(Ozymandias voice) "I fully commit to acausal theory twenty-five minutes from now."

[-] blakestacey@awful.systems 17 points 4 months ago

We are on the verge of a “cultural mass extinction.” This will dramatically increase the homogeneity of our species and as such lower the prevalence of orthanganal perspectives that could generate solutions to social problems which are not apparent to surviving cultures.

zoom and enhance

orthanganal

I shouldn't make fun of uncorrected typos. But I will.

[-] blakestacey@awful.systems 17 points 7 months ago

"Consider it from the perspective of someone who does not exist and therefore has no preferences. Who would they pick?"

[-] blakestacey@awful.systems 17 points 11 months ago

Open post in private browser window, click profile... Oh, wow, he is.

As for carving out my own little space online where dissent is purged, everyone from Steve Sailer to Zero HP Lovecraft to BAP follows me on Twitter, where I openly and emphatically argue against them.

Dude. Not healthy.

I feel like I should quote some proverb about not mud-wrestling a pig or debating a creationist. This is like going out of your way to make the Nazis feel at home in your neighborhood bar.

[-] blakestacey@awful.systems 17 points 1 year ago* (last edited 1 year ago)

As Robert Evans put it:

If you are a normal, decent, well-socialized human being, you probably have not heard about Harry Potter and the Methods of Rationality. Actually explaining what this thing is will have to happen in several different stages. But I should start by telling you this re-write of the first Harry Potter book is around 660,000 words long.

The entire Lord of the Rings series, including The Hobbit, comes in at a little less than 580,000 words.

The audiobook is 67 hours long.

[-] blakestacey@awful.systems 17 points 1 year ago

Graham takes the nebulous concept of "best" and, at great length and with great effort, fails to bring clarity.

[-] blakestacey@awful.systems 17 points 1 year ago

"It would be hard to abuse a law that forcibly sterilized everybody with an IQ under 90 provided that the person scored that low on an objective test blindly graded." —Richard Hanania

[-] blakestacey@awful.systems 17 points 1 year ago

"Honey, it's time we had the conversation."

"You mean, are we getting serious?"

"No."

"The 'is this going to be exclusive' talk?"

"No."

"The 'do you ever want kids someday' talk?"

"No."

"Moving in together? Making plans to meet my parents?"

"No, I need to tell you that I would trade up if I found someone 37% better than you, and now you need to tell me your corresponding percentage."

[a deadly silence falls]

[-] blakestacey@awful.systems 16 points 2 years ago

Wouldn't a Hallmark movie be about a young woman from the big city who finds love and fulfillment when she has to return to her small hometown and manage the local bakery, including the wacky antics of its mixed-sex staff?

[-] blakestacey@awful.systems 17 points 2 years ago

Makery: The bakery ... for straight men! Now with scones in monster truck and shark testosterone flavors! GRAAARRR
