[-] blakestacey@awful.systems 12 points 3 months ago

I ... just ... what?

He made up a whole society to be mad at.

[-] blakestacey@awful.systems 12 points 4 months ago

Why is it that religion as a whole is fine, but not this religion?

Me: This plant is poisonous.

You, a lesswrong brain genius: Plants are a vital part of the Earth's ecosystem. They make the oxygen that we breathe. You call yourself a vegetarian, and yet you have a problem with this plant. Why is it that plants as a whole are fine, but not this plant?

Me: This plant is poisonous.

[-] blakestacey@awful.systems 13 points 5 months ago

There's an "I am no man" joke in here somewhere that I am too tired to figure out.

[-] blakestacey@awful.systems 13 points 6 months ago

That Carl Shulman post from 2007 is hilarious.

After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.

Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.

Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.

I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.

The "two articles below" are by Yudkowsky.

User "gaverick" replies,

Carl, I'm inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky's chapter on AI risks for Bostrom's bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.

Shulman's response begins,

Have you read through Bostrom's work on the subject? Kurzweil has relevant info for computing power and brain imaging.

Ray mothersodding Kurzweil!

[-] blakestacey@awful.systems 12 points 1 year ago

Fun Blake fact: I was one bureaucratic technicality away from getting a literature minor to go along with my physics major. I didn't plan for that; we had a Byzantine set of course requirements that we had to meet by mixing and matching whatever electives were available, and somehow, the electives I took piled up to be almost enough for a lit minor. I would have had to take one more course on material written before some cutoff year — I think it was 1900 — but other than that, I had all the checkmarks. I probably could have argued my way to an exemption, since my professors liked me and the department would have gotten their numbers that little bit higher, but I didn't discover this until spring semester of my senior year, when I was already both incredibly busy and incredibly tired.

[-] blakestacey@awful.systems 13 points 1 year ago

Winning sentences of the day so far:

Conservapedia is 100% true and correct. Evidence: https://www.conservapedia.com/Garfield_(comic_strip)

Whoever wrote that deffo wants to fuck Nermal.

[-] blakestacey@awful.systems 12 points 1 year ago

🎶 we don't need no water let the motherfucker burn 🎶

[-] blakestacey@awful.systems 12 points 2 years ago

You are not worth responding to. Goodbye.

[-] blakestacey@awful.systems 12 points 2 years ago

oh lordy, there's a whole post

Why did evolution give most males so much testosterone instead of making low-T nerds? Obviously testosterone makes you horny and buff.

"Compared to me, 78% of the human male population are low-T betas" —Hbomberguy

[-] blakestacey@awful.systems 12 points 2 years ago* (last edited 2 years ago)

Back in 2009, Yud asked who he should do a "bloggingheads" dialog with. Two people suggested Langan.

And one suggested Scott Adams.

[-] blakestacey@awful.systems 12 points 2 years ago* (last edited 2 years ago)

Transcript of screenshot:

Between the end of 2002 and beginning of 2003, Dawn distanced herself from Singer. (Complaint, ¶ 44.) In May 2003, Singer asked Dawn to work with him on a piece he had been asked to write for the Los Angeles Times, for which she would receive co-writing credit. (Id. at ¶ 45.) From 2002 through 2020, all of Singer's female co-authors were women with whom Singer had been sexually involved, or to whom he had made clear his sexual interest. (Id. at ¶¶ 46, 47.) Despite a pattern of professional reward for sexual affection, Singer wrote to Dawn that he believed he could only be accused of anything if an angry ex "made something up" or "had a false memory." (Id. at ¶ 49.) Dawn came to understand that she too would be rewarded for maintaining an affectionate relationship with Singer, with offers of prestigious work, and would lose those offers without such expressions of warmth. (Id. at ¶ 50.)

Dawn and Singer became sexually involved again when working on the Los Angeles Times op-ed together, with Dawn agreeing to be part of Singer's "harem" as long as "she was his favorite, the lead in his orchestra, as he called it." (Complaint, ¶ 52.) Dawn wondered if she should be trying to have a child with her partner and was reminded by Singer that if she did, it would negatively affect her figure and would interfere with their affair. (Id. at ¶ 53.) In 2003, Singer told Dawn that while he still wished to be sexually involved with her, she had been replaced as the main recipient of his affections by a woman he had met at a conference in Europe and who was 10 years younger than Dawn and who was married. (Id. at ¶¶ 54-56.) Singer acknowledged the "high risk" that the affair would destroy the woman's marriage. (Id. at ¶ 57.) Dawn wrote numerous emails to Singer making it clear that she was emotionally shattered by the turn of events. (Id. at ¶ 59.)

Feeling old compared to her younger replacement, Dawn had a facelift in 2004, at the age of 41. (Complaint, ¶ 70.) Her face became infected, and she was ill for weeks. (Id. at ¶ 71.) Dawn shared news of the long-term affair with her partner, now a partner of four years. (Id. at ¶ 74.) The relationship was strained beyond repair and plans for marriage between Dawn and her partner were put on hold. (Id. at ¶¶ 75, 76.) Before learning about the affair between

nitter link

[-] blakestacey@awful.systems 12 points 2 years ago

split screen between "how to win the culture race" and "a short story about sex tourism in the Philippines"
