[-] AwesomeLowlander@sh.itjust.works 3 points 5 hours ago

https://www.aipanic.news/p/ai-blackmail-fact-checking-a-misleading

Why the claims about AI 'self-preservation' are so much BS

[-] nesc@lemmy.cafe 6 points 8 hours ago* (last edited 8 hours ago)

How can they 'self-preserve' exactly? This story has been repeated in various forms for years, and it has always sounded extremely artificial and cultish.

[-] EpeeGnome@feddit.online 1 point 1 minute ago

Link seems to be dead, so I'll just assume the obvious. The word token machine put together some scary words. Since it arranges word tokens into a really coherent order, I'm convinced it has consciousness and those words represent a coherent thought about its scary plans.

More seriously, there's a repeating pattern that can be found in the training data where threats to someone's existence are followed by words to try to keep existing, so it should be no surprise when it spits that pattern out sometimes.

[-] Perspectivist@feddit.uk 2 points 6 hours ago* (last edited 6 hours ago)

Before LLMs were a thing, the argument for why an "AI in a box" would always eventually escape - and why pulling the plug isn't an option - was essentially that it would convince you otherwise, since it has no means to physically stop you. The idea is that a true Artificial General Intelligence would always outreason the scientists, or if that doesn't work, bribe or blackmail them.

It's a bit of a tricky thought experiment, in the sense that we humans are by definition incapable of thinking up such a compelling argument ourselves. But one way to approach it is to imagine how easy it would be to pull this off on a 3-year-old when you're an adult. A true AGI would likely be orders of magnitude more intelligent than an adult human, so the gap between us and it would be even greater than the gap between an adult and a child.

I've heard of a case where a journalist challenged this argument, insisting there was no way they'd let it out. That led to them playing out the scenario, with someone else acting as the AI. Soon after, as per the rules of the game, the journalist tweeted that they had let the AI out. If I remember correctly, they even replayed the scenario, and it was let out again.

In hindsight, it's hilariously naive that we ever thought we'd keep AI off the internet until we were 100 percent sure it was safe. We ended up doing the exact opposite, even though we haven't reached AGI yet.

[-] KnitWit@lemmy.world 4 points 7 hours ago

Guy trying to sell a thing: My only worry is that it’s TOO GOOD!

this post was submitted on 31 Dec 2025
-2 points (44.4% liked)

Futurology

3558 readers

founded 2 years ago