1

2

Mother Jones has a new report about Jordan Lasker:

A Reddit account named Faliceer, which posted highly specific biographical details that overlapped with Lasker’s offline life and which a childhood friend of Lasker’s believes he was behind, wrote in 2016, “I actually am a Jewish White Supremacist Nazi.” The Reddit comment, which has not been previously reported, is one of thousands of now-deleted posts from the Faliceer account obtained by Mother Jones in February. In other posts written between 2014 and 2016, Faliceer endorses Nazism, eugenics, and racism. He wishes happy birthday to Adolf Hitler, says that “I support eugenics,” and uses a racial slur when saying those who are attracted to Black people should kill themselves.

3

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't seem to know what SCP is, and I think he might be having a psychotic episode: he's treating it as a serious possibility that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

4

cross-posted from: https://sh.itjust.works/post/36201155

We're sorry we created the Torment Nexus

5

6

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

7
Roko has ideas (bsky.app)

"Ban women from universities, higher education and most white-collar jobs."

"Allow people to privately borrow against the taxable part of the future incomes or other economic activities of their children."

So many execrable takes in one tweet, and that's only two of them. I'm tempted to think he's cynically outrage-farming, but then I remember who he is.

8
We need to escape the Gernsback Continuum (www.programmablemutter.com)
9

Found this article on the front page of r/nyc

10

Nate Soares and Big Yud have a book coming out. It's called "If Anyone Builds It, Everyone Dies". From the names of the authors and the title of the book, you already know everything you need to know about its contents without having to read it. (In fact, given the signature prolixity of the rationalists, you can be sure that it says in 50,000 words what could just as easily have been said in 20.)

In this LessWrong post, Nate identifies the real reason the rationalists have been unsuccessful at convincing people in power to take the idea of existential risk seriously. The rationalists simply don't speak with enough conviction. They hide the strength of their beliefs. They aren't bold enough.

As if rationalists have ever been shy about stating their kooky beliefs.

But more importantly, buy his book. Buy so many copies of the book that it shows up on all the best-seller lists. Buy so many copies that he gets invited to speak on fancy talk shows that will sell even more books. Basically, make him famous. Make him rich. Make him a household name. Only then can we make sure that the AI god doesn't kill us all.

Nice racket.

11

covers some of the usual suspects here

12

NYT Really Needs to Proofread Their Op-Ed Titles

13
yes scott we know you are (scottaaronson.blog)
14
submitted 1 month ago* (last edited 1 month ago) by BlueMonday1984@awful.systems to c/sneerclub@awful.systems

New Rolling Stone piece from Alex Morris, focusing heavily on our very good friends and the tech billionaires they're buddies with.

(Also, that's a pretty clever alternate title)

15

"TheFutureIsDesigned" bluechecks thusly:

You: takes 2 hours to read 1 book

Me: take 2 minutes to think of precisely the information I need, write a well-structured query, tell my agent AI to distribute it to the 17 models I've selected to help me with research, who then traverse approximately 1 million books, extract 17 different versions of the information I'm looking for, which my overseer agent then reviews, eliminates duplicate points, highlights purely conflicting ones for my review, and creates a 3-level summary.

And then I drink coffee for 58 minutes.

We are not the same.

For bonus points:

I want to live in the world of Hyperion, Ringworld, Foundation, and Dune.

You know, Dune.

(Via)

16

jesus this is gross man

17

18

This is unironically the most interesting accidental showcase of their psyche I've seen 😭 all the comments saying this is a convincing sim argument when half of the points for it are not points

Usually their arguments give me anxiety but this is actually deluded lol

19

Mfw my doomsday ai cult attracts ai cultists of a flavor I don't like

Not a fan of yud but getting daily emails from delulus would drive me to wish for the basilisk

20

21

by College Hill

22
submitted 2 months ago* (last edited 2 months ago) by Architeuthis@awful.systems to c/sneerclub@awful.systems

An excerpt has surfaced from the AI2027 podcast with siskind and the ex AI researcher, where the dear doctor makes the case for how an AGI could build an army of terminators in a year if it wanted.

It goes something like: OpenAI is worth as much as all US car companies (except tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that's kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years, and AGI would obviously be more efficient than a US wartime gov so let's say one year, generally a completely unassailable syllogism from very serious people.

Even /r/ssc commenters are calling him out, saying the whole AI doomer thing is getting more noticeably culty than usual. Edit: the thread even features a rare heavily downvoted siskind post, -10 at the time of this edit.

The latter part of the clip is the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer could be achieved, positing that if we somehow had AGI-like tech in the 1960s it would probably have to use its limited means to invent the entire tech tree that leads to late 2020s GPUs out of thin air, international supply chains and all, before starting on the road to becoming really useful.

Siskind then goes "nuh-uh!" and ultimately proceeds to give Elon's metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what's keeping modern technology down is the inability to extract more man hours from Grimes' ex, and that's how we should view the eventual AGI-LLMs, like wittle Elons that don't need sleep. And didn't you know, having non-experts micromanage everything in a project is cool and awesome actually.

23
Is Effective Altruism Neocolonial? (bobjacobs.substack.com)
24
submitted 2 months ago* (last edited 2 months ago) by Soyweiser@awful.systems to c/sneerclub@awful.systems

Begrudgingly Yeast (@begrudginglyyeast.bsky.social) on bsky informed me that I should read this short story called 'Death and the Gorgon' by Greg Egan, as he has a good handle on the subjects we talk about here. We have talked about Greg before on Reddit.

I was glad I did, so I'm going to suggest that more people do the same. The only complaint you could have is that it gives no real 'steelman' airtime to the subjects it is negative about. But then, he doesn't have to; he isn't the Guardian. Anyway, not going to spoil it, best to just give it a read.

And if you are wondering, did the lesswrongers also read it? Of course: https://www.lesswrong.com/posts/hx5EkHFH5hGzngZDs/comment-on-death-and-the-gorgon (Warning, spoilers for the story)

(Note: I'm not sure this PDF was intended to be public. I did find it on Google, but it might not be meant to be accessible this way.)

25
submitted 2 months ago by mii@awful.systems to c/sneerclub@awful.systems

Yarvin’s DOGE disillusionment is somewhat surreal, almost as if Marx had lived long enough to troll the Bolsheviks for misreading “Das Kapital.”

Archived version here.


SneerClub

1160 readers

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

See our twin at Reddit

founded 2 years ago