[-] blakestacey@awful.systems 11 points 2 days ago

That Carl Shulman post from 2007 is hilarious.

After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.

Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.

Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.

I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.

The "two articles below" are by Yudkowsky.

User "gaverick" replies,

Carl, I'm inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky's chapter on AI risks for Bostrom's bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.

Shulman's response begins,

Have you read through Bostrom's work on the subject? Kurzweil has relevant info for computing power and brain imaging.

Ray mothersodding Kurzweil!

[-] blakestacey@awful.systems 17 points 2 days ago* (last edited 2 days ago)

jhbadger:

As Adam Becker shows in his book, EAs started out being reasonable "give to charity as much as you can, and research which charities do the most good" but have gotten into absurdities like "it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not".

I haven't read Becker's book and probably won't spend the time to do so. But if this is an accurate summary, it's a bad sign for that book, because plenty of EAs were bonkers all along.

As journalists and scholars scramble to account for this ‘new’ version of EA—what happened to the bednets, and why are Effective Altruists (EAs) so obsessed with AI?—they inadvertently repeat an oversimplified and revisionist history of the EA movement. It goes something like this: EA was once lauded as a movement of frugal do-gooders donating all their extra money to buy anti-malarial bednets for the poor in sub-Saharan Africa; but now, a few EAs have taken their utilitarian logic to an extreme level, and focus on ‘longtermism’, the idea that if we wish to do the most good, our efforts ought to focus on making sure the long-term future goes well; this occurred in tandem with a dramatic influx of funding from tech scions of Silicon Valley, redirecting EA into new cause areas like the development of safe artificial intelligence (‘AI-safety’ and ‘AI-alignment’) and biosecurity/pandemic preparedness, couched as part of a broader mission to reduce existential risks (‘x-risks’) and ‘global catastrophic risks’ that threaten humanity’s future. This view characterizes ‘longtermism’ as a ‘recent outgrowth’ (Ongweso Jr., 2022) or even breakaway ‘sect’ (Aleem, 2022) that does not represent authentic EA (see, e.g., Hossenfelder, 2022; Lenman, 2022; Pinker, 2022; Singer & Wong, 2019). EA’s shift from anti-malarial bednets and deworming pills to AI-safety/x-risk is portrayed as mission-drift, given wings by funding and endorsements from Silicon Valley billionaires like Elon Musk and Sam Bankman-Fried (see, e.g., Bajekal, 2022; Fisher, 2022; Lewis-Kraus, 2022; Matthews, 2022; Visram, 2022). A crucial turning point in this evolution, the story goes, includes EAs encountering the ideas of transhumanist philosopher Nick Bostrom of Oxford University’s Future of Humanity Institute (FHI), whose arguments for reducing x-risks from AI and biotechnology (Bostrom, 2002, 2003, 2013) have come to dominate EA thinking (see, e.g., Naughton, 2022; Ziatchik, 2022).

This version of events gives the impression that EA’s concerns about x-risk, AI, and ‘longtermism’ emerged out of EA’s rigorous approach to evaluating how to do good, and has only recently been embraced by the movement’s leaders. MacAskill’s publicity campaign for WWOTF certainly reinforces this perception. Yet, from the formal inception of EA in 2012 (and earlier) the key figures and intellectual architects of the EA movement were intensely focused on promoting the suite of causes that now fly under the banner of ‘longtermism’, particularly AI-safety, x-risk/global catastrophic risk reduction, and other components of the transhumanist agenda such as human enhancement, mind uploading, space colonization, prediction and forecasting markets, and life extension biotechnologies.

To give just a few examples: Toby Ord, the co-founder of GWWC and CEA, was actively collaborating with Bostrom by 2004 (Bostrom & Ord, 2004),18 and was a researcher at Bostrom’s Future of Humanity Institute (FHI) in 2007 (Future of Humanity Institute, 2007) when he came up with the idea for GWWC; in fact, Bostrom helped create GWWC’s first logo (EffectiveAltruism.org, 2016). Jason Matheny, whom Ord credits with introducing him to global public health metrics as a means for comparing charity effectiveness (Matthews, 2022), was also working to promote Bostrom’s x-risk agenda (Matheny, 2006, 2009), already framing it as the most cost-effective way to save lives through donations in 2006 (User: Gaverick [Jason Gaverick Matheny], 2006). MacAskill approvingly included x-risk as a cause area when discussing his organizations on Felificia and LessWrong (Crouch [MacAskill], 2010, 2012a, 2012b, 2012c, 2012e), and x-risk and transhumanism were part of 80K’s mission from the start (User: LadyMorgana, 2011). Pablo Stafforini, one of the key intellectual architects of EA ‘behind-the-scenes’, initially on Felificia (Stafforini, 2012a, 2012b, 2012c) and later as MacAskill’s research assistant at CEA for Doing Good Better and other projects (see organizational chart in Centre for Effective Altruism, 2017a; see the section entitled “ghostwriting” in Knutsson, 2019), was deeply involved in Bostrom’s transhumanist project in the early 2000s, and founded the Argentine chapter of Bostrom’s World Transhumanist Association in 2003 (Transhumanismo. org, 2003, 2004). Rob Wiblin, who was CEA’s executive director from 2013-2015 prior to moving to his current role at 80K, blogged about Bostrom and Yudkowksy’s x-risk/AI-safety project and other transhumanist themes starting in 2009 (Wiblin, 2009a, 2009b, 2010a, 2010b, 2010c, 2010d, 2012). In 2007, Carl Shulman (one of the most influential thought-leaders of EA, who oversees a $5,000,000 discretionary fund at CEA) articulated an agenda that is virtually identical to EA’s ‘longtermist’ agenda today in a Felificia post (Shulman, 2007). Nick Beckstead, who co-founded and led the first US chapter of GWWC in 2010, was also simultaneously engaging with Bostrom’s x-risk concept (Beckstead, 2010). By 2011, Beckstead’s PhD work was centered on Bostrom’s x-risk project: he entered an extract from the work-in-progress, entitled “Global Priority Setting and Existential Risk: Crucial Ethical Considerations” (Beckstead, 2011b) to FHI’s “Crucial Considerations” writing contest (Future of Humanity Institute, 2011), where it was the winning submission (Future of Humanity institute, 2012). His final dissertation, entitled On the Overwhelming Importance of Shaping the Far Future (Beckstead, 2013) is now treated as a foundational ‘longtermist’ text by EAs.

Throughout this period, however, EA was presented to the general public as an effort to end global poverty through effective giving, inspired by Peter Singer. Even as Beckstead was busy writing about x-risk and the long-term future in his own work, in the media he presented himself as focused on ending global poverty by donating to charities serving the distant poor (Beckstead & Lee, 2011; Chapman, 2011; MSNBC, 2010). MacAskill, too, presented himself as doggedly committed to ending global poverty....

(Becker's previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)

[-] blakestacey@awful.systems 24 points 2 days ago

astrange:

They're members of a religion which says that if you do math in your head the right way you'll be correct about everything, and so they think they're correct about everything.

They also secondarily believe everyone has an IQ which is their DBZ power level; they believe anything they see that has math in it, and IQ is math, so they believe anything they see about IQ. So if you avoid trying to find out your own IQ you can just believe it's really high and then you're good.

Unfortunately this led them to the conclusion that computers have more IQ than them and so would automatically win any intellectual DBZ laser beam fight against them / enslave them / take over the world.

[-] blakestacey@awful.systems 15 points 3 days ago

My Grand Unified Theory of Scott Aaronson is that he doesn't have a theory of mind. On subjects far less incendiary than Zionism, he simply fails to recognize that people who share his background or interests can think differently than he does.

61

"TheFutureIsDesigned" bluechecks thusly:

You: takes 2 hours to read 1 book

Me: take 2 minutes to think of precisely the information I need, write a well-structured query, tell my agent AI to distribute it to the 17 models I've selected to help me with research, who then traverse approximately 1 million books, extract 17 different versions of the information I'm looking for, which my overseer agent then reviews, eliminates duplicate points, highlights purely conflicting ones for my review, and creates a 3-level summary.

And then I drink coffee for 58 minutes.

We are not the same.

For bonus points:

I want to live in the world of Hyperion, Ringworld, Foundation, and Dune.

You know, Dune.

(Via)

[-] blakestacey@awful.systems 58 points 1 week ago* (last edited 1 week ago)

The New York Times treats him as an expert: "Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book". He's an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it's so obscure it was deleted from Wikipedia.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory

To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it's like the whole academic discipline is trans people or something.

[-] blakestacey@awful.systems 32 points 3 months ago

Hashemi and Hall (2020) published research demonstrating that convolutional neural networks could distinguish between "criminal" and "non-criminal" facial images with a reported accuracy of 97% on their test set. While this paper was later retracted for ethical concerns rather than methodological flaws,

That's not really a sentence that should begin with "While", now, is it?

it highlighted the potential for facial analysis to extend beyond physical attributes into behavior prediction.

What the fuck is wrong with you?

28
submitted 6 months ago* (last edited 6 months ago) by blakestacey@awful.systems to c/sneerclub@awful.systems

The UCLA news office boasts, "Comparative lit class will be first in Humanities Division to use UCLA-developed AI system".

The logic the professor gives completely baffles me:

"Normally, I would spend lectures contextualizing the material and using visuals to demonstrate the content. But now all of that is in the textbook we generated, and I can actually work with students to read the primary sources and walk them through what it means to analyze and think critically."

I'm trying to parse that. Really and truly I am. But it just sounds like this: "Normally, I would [do work]. But now, I can actually [do the same work]."

I mean, was this person somehow teaching comparative literature in a way that didn't involve reading the primary sources and, I'unno, comparing them?

The sales talk in the news release is really going all-in on selling that undercoat.

Now that her teaching materials are organized into a coherent text, another instructor could lead the course during the quarters when Stahuljak isn’t teaching — and offer students a very similar experience. And with AI-generated lesson plans and writing exercises for TAs, students in each discussion section can be assured they’re receiving comparable instruction to those in other sections.

Back in my day, we called that "having a book" and "writing a lesson plan".

Yeah, going from lecture notes and slides to something shaped like a book is hard. I know because I've fuckin' done it. And because I put in the work, I got the benefit of improving my own understanding by refining my presentation. As the old saying goes, "Want to learn a subject? Teach it." Moreover, doing the work means that I can take a little pride in the result. Serving slop is the cafeteria's job.

(Hat tip.)

[-] blakestacey@awful.systems 40 points 9 months ago* (last edited 9 months ago)

an hackernews:

a high correlation between intelligence and IQ

motherfuckers out here acting like "intelligence" is sufficiently well-defined that a correlation between it and anything else can be computed

intelligence can be reasonably defined as "knowledge and skills to be successful in life, i.e. have higher-than-average income"

eat a bag of dicks

23

So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?

[-] blakestacey@awful.systems 34 points 10 months ago

shot:

The upper bound for how long to pause AI is only a century, because “farming” (artificially selecting) higher-IQ humans could probably create competent IQ 200 safety researchers.

It just takes C-sections to enable huge heads and medical science for other issues that come up.

chaser:

Indeed, the bad associations ppl have with eugenics are from scenarios much less casual than this one

going full "villain in a Venture Bros. episode who makes the Monarch feel good by comparison":

Sure, I don't think it's crazy to claim women would be lining up to screw me in that scenario

[-] blakestacey@awful.systems 33 points 11 months ago

Some of Kurzweil's predictions in 1999 about 2019:

A $1,000 computing device is now approximately equal to the computational ability of the human brain. Computers are now largely invisible and are embedded everywhere. Three-dimensional virtual-reality displays, embedded in glasses and contact lenses, provide the primary interface for communication with other persons, the Web, and virtual reality. Most interaction with computing is through gestures and two-way natural-language spoken communication. Realistic all-encompassing visual, auditory, and tactile environments enable people to do virtually anything with anybody regardless of physical proximity. People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers.

Also:

Three-dimensional nanotube lattices are now a prevalent form of computing circuitry.

And:

Autonomous nanoengineered machines can control their own mobility and include significant computational engines.

And:

"Phone" calls routinely include high-resolution three-dimensional images projected through the direct-eye displays and auditory lenses. Three-dimensional holography displays have also emerged. In either case, users feel as if they are physically near the other person. The resolution equals or exceeds optimal human visual acuity. Thus a person can be fooled as to whether or not another person is physically present or is being projected through electronic communication.

And:

The all-enveloping tactile environment is now widely available and fully convincing. Its resolution equals or exceeds that of human touch and can simulate (and stimulate) all of the facets of the tactile sense, including the sensing of pressure, temperature, textures, and moistness. Although the visual and auditory aspects of virtual reality involve only devices you have on or in your body (the direct-eye lenses and auditory lenses), the "total touch" haptic environment requires entering a virtual reality booth. These technologies are popular for medical examinations, as well as sensual and sexual interactions with other human partners or simulated partners. In fact, it is often the preferred mode of interaction, even when a human partner is nearby, due to its ability to enhance both experience and safety.

And:

Automated driving systems have been found to be highly reliable and have now been installed in nearly all roads.

And:

The type of artistic and entertainment product in greatest demand (as measured by revenue generated) continues to be virtual-experience software, which ranges from simulations of "real" experiences to abstract environments with little or no corollary in the physical world.

And:

The expected life span, which, as a result of the first Industrial Revolution (1780 through 1900) and the first phase of the second (the twentieth century), almost doubled from less than forty, has now substantially increased again, to over one hundred.

[-] blakestacey@awful.systems 41 points 11 months ago* (last edited 11 months ago)

Some of Kurzweil's predictions in 1999 about 2009:

  • “Unused computes on the Internet are harvested, creating … human brain hardware capacity.”
  • “The online chat rooms of the late 1990s have been replaced with virtual environments…with full visual realism.”
  • “Interactive brain-generated music … is another popular genre.”
  • “the underclass is politically neutralized through public assistance and the generally high level of affluence”
  • “Diagnosis almost always involves collaboration between a human physician and a … expert system.”
  • “Humans are generally far removed from the scene of battle.”
  • “Despite occasional corrections, the ten years leading up to 2009 have seen continuous economic expansion”
  • “Cables are disappearing.”
  • “grammar checkers are now actually useful”
  • “Intelligent roads are in use, primarily for long-distance travel.”
  • “The majority of text is created using continuous speech recognition (CSR) software”
  • “Autonomous nanoengineered machines … have been demonstrated and include their own computational controls.”

[-] blakestacey@awful.systems 38 points 1 year ago

Carl T. Bergstrom, 13 February 2023:

Meta. OpenAI. Google.

Your AI chatbot is not hallucinating.

It's bullshitting.

It's bullshitting, because that's what you designed it to do. You designed it to generate seemingly authoritative text "with a blatant disregard for truth and logical coherence," i.e., to bullshit.

Me, 2 February 2023:

I confess myself a bit baffled by people who act like "how to interact with ChatGPT" is a useful classroom skill. It's not a word processor or a spreadsheet; it doesn't have documented, well-defined, reproducible behaviors. No, it's not remotely analogous to a calculator. Calculators are built to be right, not to sound convincing. It's a bullshit fountain. Stop acting like you're a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is not to swim in it.

19

a lesswrong: 47-minute read extolling the ambition and insights of Christopher Langan's "CTMU"

a science blogger back in the day: not so impressed

[I]t’s sort of like saying “I’m going to fix the sink in my bathroom by replacing the leaky washer with the color blue”, or “I’m going to fly to the moon by correctly spelling my left leg.”

Langan, incidentally, is a 9/11 truther, a believer in the "white genocide" conspiracy theory and much more besides.

18

In which a man disappearing up his own asshole somehow fails to be interesting.

[-] blakestacey@awful.systems 35 points 2 years ago

Feynman had a story about trying to read somebody's paper before a grand interdisciplinary symposium. As he told it, he couldn't get through the jargon, until he stopped and tried to translate just one sentence. He landed on a line like, "The individual member of the social community often receives information through visual, symbolic channels." And after a lot of crossing-out, he reduced that to "People read."

Yud, who idolizes Feynman above all others:

I also remark that the human equivalent of a utility function, not that we actually have one, often revolves around desires whose frustration produces pain.

Ah. People don't like to hurt.

6
submitted 2 years ago* (last edited 2 years ago) by blakestacey@awful.systems to c/sneerclub@awful.systems

Flashback time:

One of the most important and beneficial trainings I ever underwent as a young writer was trying to script a comic. I had to cut down all of my dialogue to fit into speech bubbles. I was staring closely at each sentence and striking out any word I could.

"But then I paid for Twitter!"

6

AI doctors will revolutionize medicine! You'll go to a service hosted in Thailand that can't take credit cards, and pay in crypto, to get a correct diagnosis. Then another VISA-blocked AI will train you in following a script that will get a human doctor to give you the right diagnosis, without tipping that doctor off that you're following a script; so you can get the prescription the first AI told you to get.

Can't get mifepristone or puberty blockers? Just have a chatbot teach you how to cast Persuasion!

1

Yudkowsky writes,

How can Effective Altruism solve the meta-level problem where almost all of the talented executives and ops people were in 1950 and now they're dead and there's fewer and fewer surviving descendants of their heritage every year and no blog post I can figure out how to write could even come close to making more people being good executives?

Because what EA was really missing is collusion to hide the health effects of tobacco smoking.

1

Aella:

Maybe catcalling isn't that bad? Maybe the demonizing of catcalling is actually racist, since most men who catcall are black

Quarantine Goth Ms. Frizzle (@spookperson):

your skull is full of wet cat food

2

Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk).

See, it's like marketing the idea, in a multilevel way

2

Steven Pinker tweets thusly:

My friend & Harvard colleague Howard Gardner, offers a thoughtful critique of my book Rationality -- but undermines his cause, as all skeptics of rationality must do, by using rationality to make it.

"My colleague and fellow esteemed gentleman of Harvard neglects to consider the premise that I am rubber and he is glue."

1

In the far-off days of August 2022, Yudkowsky said of his brainchild,

If you think you can point to an unnecessary sentence within it, go ahead and try. Having a long story isn't the same fundamental kind of issue as having an extra sentence.

To which MarxBroshevik replied,

The first two sentences have a weird contradiction:

Every inch of wall space is covered by a bookcase. Each bookcase has six shelves, going almost to the ceiling.

So is it "every inch", or are the bookshelves going "almost" to the ceiling? Can't be both.

I've not read further than the first paragraph so there's probably other mistakes in the book too. There's kind of other 'mistakes' even in the first paragraph, not logical mistakes as such, just as an editor I would have... questions.

And I elaborated:

I'm not one to complain about the passive voice every time I see it. Like all matters of style, it's a choice that depends upon the tone the author desires, the point the author wishes to emphasize, even the way a character would speak. ("Oh, his throat was cut," Holmes concurred, "but not by his own hand.") Here, it contributes to a staid feeling. It emphasizes the walls and the shelves, not the books. This is all wrong for a story that is supposed to be about the pleasures of learning, a story whose main character can't walk past a bookstore without going in. Moreover, the instigating conceit of the fanfic is that their love of learning was nurtured, rather than neglected. Imagine that character, their family, their family home, and step into their library. What do you see?

Books — every wall, books to the ceiling.

Bam, done.

This is the living-room of the house occupied by the eminent Professor Michael Verres-Evans,

Calling a character "the eminent Professor" feels uncomfortably Dan Brown.

and his wife, Mrs. Petunia Evans-Verres, and their adopted son, Harry James Potter-Evans-Verres.

I hate the kid already.

And he said he wanted children, and that his first son would be named Dudley. And I thought to myself, what kind of parent names their child Dudley Dursley?

Congratulations, you've noticed the name in a children's book that was invented to sound stodgy and unpleasant. (In The Chocolate Factory of Rationality, a character asks "What kind of a name is 'Wonka' anyway?") And somehow you're trying to prove your cleverness and superiority over canon by mocking the name that was invented for children to mock. Of course, the Dursleys were also the start of Rowling using "physically unsightly by her standards" to indicate "morally evil", so joining in with that mockery feels ... It's aged badly, to be generous.

Also, is it just the people I know, or does having a name picked out for a child that far in advance seem a bit unusual? Is "Dudley" a name with history in his family — the father he honored but never really knew? His grandfather who died in the War? If you want to tell a grown-up story, where people aren't just named the way they are because those are names for children to laugh at, then you have to play by grown-up rules of characterization.

The whole stretch with Harry pointing out they can ask for a demonstration of magic is too long. Asking for proof is the obvious move, but it's presented as something only Harry is clever enough to think of, and as the end of a logic chain.

"Mum, your parents didn't have magic, did they?" [...] "Then no one in your family knew about magic when Lily got her letter. [...] If it's true, we can just get a Hogwarts professor here and see the magic for ourselves, and Dad will admit that it's true. And if not, then Mum will admit that it's false. That's what the experimental method is for, so that we don't have to resolve things just by arguing."

Jesus, this kid goes around with L's theme from Death Note playing in his head whenever he pours a bowl of breakfast crunchies.

Always Harry had been encouraged to study whatever caught his attention, bought all the books that caught his fancy, sponsored in whatever maths or science competitions he entered. He was given anything reasonable that he wanted, except, maybe, the slightest shred of respect.

Oh, sod off, you entitled little twit; the chip on your shoulder is bigger than you are. Your parents buy you college textbooks on physics instead of coloring books about rocketships, and you think you don't get respect? Because your adoptive father is incredulous about the existence of, let me check my notes here, literal magic? You know, the thing which would upend the body of known science, as you will yourself expound at great length.

"Mum," Harry said. "If you want to win this argument with Dad, look in chapter two of the first book of the Feynman Lectures on Physics.

Wesley Crusher would shove this kid into a locker.
