Before LLMs were a thing, the argument for why an "AI in a box" would always eventually escape - and why pulling the plug isn't a real option - was essentially that it would talk you out of it, since persuasion is the only tool it has; it has no means to physically stop you. The idea is that a true Artificial General Intelligence would always out-reason the scientists guarding it, or failing that, bribe or blackmail them.
It's a tricky thought experiment, in the sense that we humans are by definition incapable of thinking up an argument that compelling ourselves. But one way to approach it is to imagine how easy it would be for you, as an adult, to pull the same trick on a 3-year-old. A true AGI would likely be orders of magnitude more intelligent than any adult human, so the gap between us and it would be even greater than the gap between an adult and a child.
I've heard of a case where a journalist challenged this argument, insisting there was no way they'd ever let it out. That led to them playing out the scenario, with someone else acting as the AI. Soon after, as per the rules of the game, the journalist tweeted that they had let the AI out. If I remember correctly, they even replayed the scenario, and the AI was let out again.
In hindsight, it was hilariously naive to ever think we'd keep AI off the internet until we were 100 percent sure it was safe. We ended up doing the exact opposite, even though we haven't reached AGI yet.