submitted 1 day ago* (last edited 1 day ago) by Architeuthis@awful.systems to c/sneerclub@awful.systems

An excerpt has surfaced from the AI2027 podcast with siskind and the ex-OpenAI researcher, where the dear doctor makes the case for how an AGI could build an army of terminators in a year if it wanted.

It goes something like: OpenAI is worth as much as all US car companies (except Tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that's kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years, and an AGI would obviously be more efficient than a US wartime gov, so let's say one year. Generally a completely unassailable syllogism from very serious people.

Even /r/ssc commenters are calling him out about the whole AI doomer thing getting more noticeably culty than usual. edit: The thread even features a rare, heavily downvoted siskind post, sitting at -10 at the time of this edit.

The latter part of the clip is the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer can be achieved, positing that if we somehow had AGI-like tech in the 1960s, it would probably have had to use its limited means to invent, out of thin air, the entire tech tree that leads to late-2020s GPUs, international supply chains and all, before starting on the road to becoming really useful.

Siskind then goes "nuh-uh!" and ultimately proceeds to give Elon's metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what's keeping modern technology down is the inability to extract more man-hours from Grimes' ex, and that's how we should view the eventual AGI-LLMs, like wittle Elons that don't need sleep. And didn't you know, having non-experts micromanage everything in a project is cool and awesome, actually.

[-] YourNetworkIsHaunted@awful.systems 7 points 11 hours ago

Heartwarming: the worst person you know just outed themselves as a fucking moron

Even the people who are disagreeing are still kinda sneerable though. Like this guy:

Even in the worst case, DOGE firing too many people is not a particularly serious danger. Aside from Skynet, you should be worried about people using AI to help engineer deadly viruses or nuclear weapons, not firing government employees.

That's still assuming that the AI is a valuable tool for the purpose of genetic engineering or nuclear weapons manufacturing or whatever! Like, the hard part of building a nuke is very much in acquiring the materials, engineering everything to go off at the right time, and actually building it without killing yourself. Very little of that is meaningfully assisted by LLMs even if they did work as advertised. And there are so many people in that very thread alone going into detail on how biological engineering is incredibly hard in ways that similarly aren't bottlenecked by the kinds of things current AI architectures can do. The degree to which they comedically miss the point of the folks who keep trying to explain reality is off the charts.

[-] UltraGiGaGigantic@lemmy.ml 7 points 19 hours ago

"AI overlords are taking over!" - human overlords probably

[-] swlabr@awful.systems 25 points 1 day ago

An AGI could microwave a burrito so hot that not even the AGI, in its omnipotence, could eat it

[-] diz@awful.systems 11 points 23 hours ago* (last edited 23 hours ago)

It is as if there were people fantasizing about automaton mouths and lips and tongues and vocal cords for some reason, coming up with all these fantasies of how it'll be when automatons can talk.

And then Edison invents the phonograph.

And then they stick their you know what in the gearing between the cylinder and the screw.

Except somehow more stupid, because these guys are worried about the AI apocalypse while boosting the AI hype that pays for this supposed apocalypse.

edit: If someone had said in the 1850s "automatons won't be able to talk for another 150 years or longer because the vocal tract is too intricate", and some automaton fetishist had said they would be able to talk in 20 years, the phonograph shouldn't lend any credence whatsoever to the latter. What is different this time is that the phonograph was genuinely extremely useful for what it was, while generative AI is not quite as useful, and they're going for the automaton-fetishist money.

[-] swlabr@awful.systems 12 points 23 hours ago

“This thing we don’t understand yet is probably very simple and easy to replicate and I say this as someone who does not understand the thing yet because once again, nobody does!” - All “futurist” “genius” “thought leaders”

[-] SoftestSapphic@lemmy.world 5 points 20 hours ago* (last edited 20 hours ago)

Lmao, the AGI tries to reason out how to achieve world domination, but it's just trained on the open internet, and accidentally starts taking a sex fantasy blog about world domination as a reference for reality.

The AGI decides it needs to buy all the car factories and make murder bots, but it gets stuck in an error loop because it can't interact correctly with the web portal that allows it to contact the owner of the first factory. It runs in this loop forever, and the CO2 emissions from its datacenter eventually choke out all large animal life on earth.

Then the jellyfish develop sentience, are responsible about it, and realize AGI and AI were just a marketing gimmick.

[-] Tar_alcaran@sh.itjust.works 12 points 1 day ago

A thing that doesn't exist, and that we don't even have the concept of a plan for how to make, could easily do something extremely unlikely

[-] gerikson@awful.systems 17 points 1 day ago

Oh man this is peak venture capitalism crossed with Factorio - valuations are actually cash, and a factory is a black box where you just upload new software and other stuff comes out.

Let's take your average holder of car manufacturer stock. You're holding the stock because you believe the car manufacturer will continue making competitive products, and you'll get either dividends or higher valuations. Then OpenAI pitches up and offers you - what? They don't even have stock! Even if they did, you'd be exchanging a stake in something known for a stake in an enterprise that has never made any cars, and when asked what kind of business plan they have, they look shifty. No fucking way anyone will sell their stake for less than double what they have, especially if they find out the factory they're selling is gonna produce machines that will kill us all.

[-] Soyweiser@awful.systems 12 points 1 day ago

Yeah, the financial illiteracy is quite high, on top of the rest. But don't worry, AI Nobel prize winners say it is possible!

(Are there multiple AI Nobel prize winners who are AI doomers?)

[-] scruiser@awful.systems 7 points 1 day ago* (last edited 1 day ago)

Stephen Hawking was starting to promote AI doomerism in 2014, but he's not a Nobel prize winner. Yoshua Bengio is a doomer, but no Nobel prize either, although he is pretty decorated in awards. So yeah, looks like one winner and a few other notable doomers who aren't actually Nobel Prize winners somehow became winners plural in Scott's argument from authority. Also, considering the long list of examples of Nobel disease, I really don't think Nobel Prize winner endorsement is a good way to gauge experts' attitudes or sentiment.

[-] Soyweiser@awful.systems 4 points 1 day ago

I was very tempted to go "don't think it is more than one Nobel guy, which is not great because of Nobel disease anyway. I could link to RationalWiki here, but that has come under threat because the person whose content you enjoy, Scott, started a lawsuit against them", but I think that might be a bit culture-warry, and I also try not to react at the places we point towards, as that just leads to harassment-like behaviour. Also, Penrose is a Nobel prize winner who is against AGI stuff.

[-] scruiser@awful.systems 6 points 1 day ago

Yeah it's really not productive to engage directly.

I'd almost categorize Penrose as a borderline case of Nobel disease himself for stuff he's said about Quantum Consciousness and, relatedly, the halting problem and Gödel's incompleteness theorem. But he actually has a proposed mechanism (involving microtubules) that is testable and falsifiable, and the physics half of what he is talking about is within his domain of expertise.

[-] pcalau12i@lemmy.world 6 points 20 hours ago* (last edited 14 hours ago)

This is why I very much dislike Popperism. Popperites are convinced "science = falsifiability." If I argue that the universe is made of cheese and the mechanism is a wizard that you can only see through a telescope with a special handcrafted ruby lens that I sell at my shop for $4000, should research institutions be expected to take my claim seriously and buy my ruby lens to test it? I mean, it's technically falsifiable, either they will look through the lens and see the wizard and universe of cheese or they will not. If you are a Popperite you have no choice but to admit that it is a legitimate scientific theory.

There should be more to a scientific proposal than it technically being "falsifiable." Penrose's "theory" is quantum mysticism; it is not a scientific theory just because it is in principle testable.

  1. He bases it on a claim that Gödel's theorem shows certain things are non-computable yet we can choose to believe those non-computable things anyway, and that this proves "consciousness" is non-computable. This is just a comically ridiculous argument. You can program an AI to believe in things it cannot prove as well. It doesn't prove anything.
  2. He claims that there is a physical collapse of the wave function, with zero evidence to back it up, and that it is caused by gravity. His theory is incredibly speculative, not compatible with the predictions of quantum mechanics or even with special relativity, and all attempts to test it have turned out negative.
  3. He claims that since this "collapse" isn't computable, and his comically bad argument #1 shows "consciousness" isn't computable, quantum mechanics must cause consciousness, and so we should search desperately for anything in the brain that looks vaguely quantum mechanical as "evidence."

It's even more ridiculous when you realize that microtubules are structural: they don't play a role in information processing in the brain, and you have microtubules all throughout your body. Them having quantum effects in them is meaningless. Even if you could empirically demonstrate beyond a shadow of a doubt that microtubules do somehow create coherent quantum states that the brain makes use of, that would just be an interesting fact on its own. It would not prove #1 or #2. Microtubules are not a "mechanism" for #1 and #2; even if they played a role in decision making as if the brain were a quantum computer (they don't), you cannot derive from this that quantum mechanics somehow explains why people can believe things without proof (why do I even have to say this, it's so stupid!) or that the reduction of the wave function is a physical process caused by gravity.

There is no good argument that even #1 and #2 are tied together. Even if you proved there is indeed a non-local physical collapse and overturned all our modern scientific theories, that wouldn't demonstrate #1 or #3 either. None of the claims in the theory have any obvious connections to one another beyond spurious, largely incoherent arguments. This is not his domain of expertise. You could argue #2 is within his domain, but #1 and #3 are nowhere near it.

Physicists have proposed speculative physical collapse theories before, like GRW, and we forget about them because they were interesting but went nowhere: there is zero evidence "collapse" is a physical process, and treating it as such requires overturning all of modern physics, as it could not be made compatible with special or general relativity, nor could it reproduce the predictions of quantum mechanics, requiring you to rewrite all of physics from the ground up. The reduction of the wave function is a measurement update; it is epistemic. There is no evidence that it is a physical process.

Even then, theoretical physicists speculate about a lot of things that turn out to go nowhere; that itself is par for the course. But Penrose goes above and beyond this and branches into philosophy, biology, and neuroscience, using comically bad arguments to try and tie them all together. Those are not his areas of expertise at all. It reminds me of the old 19th-century essay Natural Science and the Spirit World, which documented a lot of renowned scientists who also had completely crazy side projects, like Alfred Wallace, the guy who co-discovered evolution by natural selection, who believed he could also raise spirits from the dead and converse with them.

[-] Mitchell_Porter@awful.systems 2 points 5 hours ago

There's a bit more to Penrose's ideas than that.

For example, his version of gravity-driven wavefunction collapse was motivated by Hawking's argument for information loss in black hole evaporation. In the era of string theory, it seems a majority of quantum gravity theorists think that information is conserved, but back in the day Hawking's position was a serious one, and Penrose had the ingenious idea that information gain in wavefunction collapse could in some sense balance information loss in quantum gravity, for a net conservation of phase space volume.

Another example - that a quantum-gravitational process could be noncomputable - actually makes sense, since the path integral involves 4-manifolds and some properties of 4-manifolds are actually undecidable. I agree that there's something wrong with his argument that metamathematical thought must supervene on some kind of trans-Turing computation, since it rests on humans having unlimited metamathematical knowledge rather than just belief. But you can't really hope to disentangle all the issues here without having some theory of intentionality and how material states even manage to be about anything, and he doesn't go there.

As for the microtubules, neuronal microtubules do have some distinctive properties, e.g. they line up with the axon. It's tempting to suppose that there's an electromagnetic interaction between the membrane action potential and electronic states in the microtubule. There are a handful of people who work on topics like this, but it would be enormously difficult to demonstrate such an interaction, if the debate over quantum speedups in photosynthesis is any guide.

Fashionable biophysical speculation seems to have moved on to the ideas of Karl Friston and Michael Levin, but I still esteem Penrose's speculations. I sometimes think of it as a science-fictional anticipation of what the actual truth will be, the way that Einstein worked on a unified field theory a few decades before the mainstream started talking about a theory of everything.

[-] scruiser@awful.systems 4 points 17 hours ago* (last edited 17 hours ago)

Yeah, I pretty much agree. Penrose compares favorably to other cases of Nobel disease because the bar is so low (the Wikipedia page has examples of racism, eugenics, homeopathy, astrology), not because his ideas about Quantum consciousness are actually good. It's not good to cite Penrose as someone notable who disagrees with the possibility of AGI, because the reason he disagrees is that he believes in Quantum mysticism and misunderstands Gödel's theorem and computer science.

[-] Architeuthis@awful.systems 8 points 1 day ago

(Are there multiple AI Nobel prize winners who are AI doomers?)

There's Geoffrey Hinton, I guess, even if his 2024 Nobel in (somehow) Physics seemed like a transparent attempt at trend-chasing on behalf of the Nobel committee.

[-] Soyweiser@awful.systems 6 points 1 day ago

That is the one I was thinking of; the way the comments are phrased makes it seem like there are a lot of winners who are doomers. Guess Hinton is a one-man brigade.

[-] BigMuffin69@awful.systems 8 points 1 day ago

I think Demis Hassabis (chemistry, for AlphaFold) has said the chance of AI killing all of humanity is somewhere between 0 and 100%.

[-] diz@awful.systems 9 points 23 hours ago

is somewhere between 0 and 100%.

That really pins it down, doesn't it?

[-] Soyweiser@awful.systems 15 points 1 day ago* (last edited 1 day ago)

and that’s how we should view the eventual AGI-LLMs, like wittle Elons that don’t need sleep.

Wonder how many people stopped being AI doomers after this. I use the same argument against AI doom.

E: The guy doing the most basic 'it really is easier to imagine the end of the world than the end of capitalism' bit in the comments and having somebody just explode at him about 'not being able to imagine it properly' is a bit amusing. I know how it feels to have a massive, hard-to-control reaction over stuff like that, but oof, what are you doing man. And that poor anti-capitalist guy is in for a rude awakening when he discovers what kind of place r/ssc is.

E2: Scott is now going 'this clip is taken out of context!', not that the context improves it. (He claims he was explaining what others believe, not what he believes, but if that is so, why are you so aggressively defending the stance? Hope this Scott guy doesn't have a history of lying about his real beliefs.)

[-] Architeuthis@awful.systems 11 points 1 day ago* (last edited 1 day ago)

He claims he was explaining what others believe, not what he believes

Others as in specifically his co-writer for AI2027, Daniel Kokotajlo, the actual ex-OpenAI researcher.

I'm pretty annoyed at having this clip spammed to several different subreddits, with the most inflammatory possible title, out of context, where the context is me saying "I disagree that this is a likely timescale but I'm going to try to explain Daniel's position" immediately before. The reason I feel able to explain Daniel's position is that I argued with him about it for ~2 hours until I finally had to admit it wasn't completely insane and I couldn't find further holes in it.

Pay no attention to this thing we just spent two hours exhaustively discussing that I totally wasn't into, it's not really relevant context.

Also, the title is inflammatory only in the context of already knowing him for a ridiculous AI doomer; otherwise it's fine. Inflammatory would be titling the video "economically illiterate bald person thinks valuations can force-buy car factories, China having biomedicine research is like Elon running SpaceX".

[-] BigMuffin69@awful.systems 9 points 1 day ago

I couldn’t find further holes in it

Here's a couple:

  1. iirc it claims we'll have reliable "agents" in mid-2025. Fellas it's almost June in the year of the "agents" and frankly I don't see shit. We are not starting strong here.
  2. they predict a 10k person anti-AI protest in DC. For context, the recent "Hands Off" protest in DC saw 100k turnout, and the Israel/Palestine protests in DC saw 300k in 2023. A ten-thousand-person protest isn't really anything out of the ordinary? It's almost like the authors have never been to a protest, or don't understand collective action because they live in a bubble or something? But they assure us this document is thoroughly researched. Maybe their point was self-deprecating: "woe is us, only 10K people show up :("
  3. When they get into their super AGI fanfic, they describe Agent-n as "never stops training", continuously learning from the environment. The only way I can read this is that somehow, in the next couple of years, we coincidentally discover paradigm-shifting algorithmic breakthroughs that make DL obsolete, so we can abandon train-inference approaches and instead have this embodied entity constantly taking feedback from the environment to "train" on, yet the system itself is still described under the massive, datacenter-heavy DL framework. It's like they know that bio intelligence has this continuous feedback mechanism, so obviously AI researchers will just patch that in. How hard can it be?
  4. Ong, I swear at some point they just put in there "hallucinations are solved", the thing they have been claiming will be solved next month ever since 2023.

[-] Architeuthis@awful.systems 2 points 12 hours ago* (last edited 12 hours ago)

Microsoft says Visual Studio is going to incorporate coding 'agents' as soon as the next minor version, maybe. I can't really see them buying up car factories or beating Pokemon, but 'agent' as an AI marketing term is definitely a part of the current hype cycle.

[-] scruiser@awful.systems 3 points 17 hours ago

Fellas it’s almost June in the year of the “agents” and frankly I don’t see shit.

LLM agents can beat Pokemon... if you give them enough customized tools and prompting that, with the same number of lines of instruction, you could just directly code a bot that beats Pokemon without an LLM in the first place. And that's if you don't mind the LLM agent playing much, much worse than literal children.

[-] mountainriver@awful.systems 5 points 21 hours ago

  1. You get better at being smart by INT-grinding. A machine could be INT-grinding the whole time. It's like in Oblivion: if you wanted to grind Speed, you could go into a city, stand in a doorway, and place something heavy on the jump key on the keyboard. Then while you take care of the dishes or something, your character grinds. But for INT!

If it gets smart enough it will start finding hacks, like those INT-increasing potions in Morrowind that increased your Alchemy so you could make even better INT potions.

It might even get smart enough to escape the Elder Scrolls and start playing another game!

[-] Soyweiser@awful.systems 1 points 7 hours ago

INT grinding is for noobs; just brew potions which increase your potion-brewing skill, then make an INT pot.

[-] Architeuthis@awful.systems 5 points 12 hours ago* (last edited 12 hours ago)

That IQ after a certain level somehow turns into mana points is a core rationalist assumption about how intelligence works.

[-] YourNetworkIsHaunted@awful.systems 3 points 17 hours ago

I'm not very up on my Elder Scrolls lore, but I think this is where I'm supposed to say something about CHIM?

[-] BigMuffin69@awful.systems 6 points 21 hours ago* (last edited 21 hours ago)

Grinding in Oblivion you say?

[-] AllNewTypeFace@leminal.space 3 points 19 hours ago

Isn’t that basically Yud’s robot god inferring general relativity from three frames of video of an apple falling?

[-] BigMuffin69@awful.systems 10 points 1 day ago* (last edited 1 day ago)

Daniel Kokotajlo, the actual ex-OpenAI researcher

Unclear to me what Daniel actually did as a 'researcher' besides draw a curve going up on a chalkboard (true story: the one interaction I had with LeCun was showing him Daniel's LW acct, which is just singularity posting, and Yann thought it was big funny). I admit I am guilty of engineer-gatekeeping posting here, but I always read Danny boy as a guy they hired to give lip service to the whole "we are taking safety very seriously, so we hired LW philosophers" thing, and then after Sam did the uno reverse coup, he dropped all pretense of giving a shit / funding their fanfic circles.

Ex-OAI "governance" researcher just means they couldn't forecast that they were the marks all along. This is my belief, unless he reveals that he superforecasted back in 1998 that Altman would coup and sideline him. Someone please correct me if I'm wrong and they have evidence that Daniel actually understands how computers work.

[-] Architeuthis@awful.systems 6 points 1 day ago

Didn't mean to imply otherwise, just wanted to point out that the call is coming from inside the house.

[-] BigMuffin69@awful.systems 6 points 1 day ago

np, im just screaming into the void on this beautiful Monday morning

[-] scruiser@awful.systems 8 points 1 day ago

He claims he was explaining what others believe, not what he believes, but if that is so, why are you so aggressively defending the stance?

Literally the only difference between Scott's beliefs and AI:2027 as a whole is his ~~prophecy~~ estimate is a year or two later. (I bet he'll be playing up that difference as AI 2027 fails to happen in 2027, then also doesn't happen in 2028.)

Elsewhere in the thread he whines to the mods that the original poster is spamming every vaguely lesswrong- or EA-related subreddit with engagement bait. That poster is katxwoods... as in Kat Woods... as in a member of Nonlinear, the EA "organization" whose idea of philanthropic research was nonstop exotic vacations around the world. And, iirc, they are most infamous among us sneerers for "hiring" an underpaid (really underpaid, like couldn't-afford-basic-necessities underpaid) intern they also used as a 24/7 live-in errand girl, drug runner, and sexual servant.

[-] Soyweiser@awful.systems 5 points 1 day ago* (last edited 1 day ago)

Deleted earlier message, sorry, I called Scott out for not doing things he had done. Even if the whole mods-only-restricting-her-messages-after-she-went-after-Scott thing is quite iffy. (LW people write normally challenge failed: "One upfront caveat. I am speaking about “Kat Woods” the public figure, not the person. If you read something here and think, “That’s not a true/nice statement about Kat Woods”, you should know that I would instead like you to think “That’s not a true/nice statement about the public persona Kat Woods, the real human with complex goals who I'm sure is actually really cool if I ever met her, appears to be cultivating.”" The idea is good, but this reads like a bit of a sovcit-style text and could have been replaced with 'I mean this not as an attack on her personally, I'm just doubting the effectiveness of her spammy posting style'.) (E: I do agree with them, however; not with the 'we should check if this is effective' part, but more that the posting style is low effort, annoying, boring, dated, a bit cringe, etc.)

Also: Scott: 'Mods mods mods, kat spill my jice help hel help help'

[-] YourNetworkIsHaunted@awful.systems 3 points 11 hours ago

I am of course referring here to KAT WOODS the fictional corporate person, and not x_X_69_kat-of-the-family-woods_69_X_x the flesh and blood woman created by our Lord and Savior.

[-] Architeuthis@awful.systems 8 points 1 day ago

Also, add "obvious and overdetermined" to the pile of siskindisms, next to "very non-provably not-correct".

[-] gerikson@awful.systems 6 points 1 day ago

Wow he looks even dorkier in video than in photos.

[-] shinigami3@awful.systems 2 points 4 hours ago

I know it's bad form criticizing people's appearances, but I had never seen him before and I couldn't help but think: of course he looks like that

[-] Architeuthis@awful.systems 7 points 1 day ago

[-] YourNetworkIsHaunted@awful.systems 2 points 11 hours ago

I was thinking more Bunsen Honeydew, actually.

[-] scruiser@awful.systems 4 points 17 hours ago

That's unfair.

Beaker deserves better than to get compared to a eugenicist ~~crypto~~fascist.

[-] aninjury2all@awful.systems 3 points 15 hours ago

Judging solely by appearances I have some bad news for Scoot if his eugenics fantasies ever came to pass...
