We're not even remotely close. The promise of AGI is part of the AI hype machine and taking it seriously is playing into their hands.
Irrelevant at best, harmful at worst 🤷
"Dude trust me, just give me 40 billion more dollars, lobby for complete deregulation of the industry, and get me 50 more petabytes of data, then we will have a little human in the computer! RealshitGPT will have human level intelligence!"
We’re not even remotely close.
That’s just the other side of the same coin whose flip side claims AGI is right around the corner. The truth is, you couldn’t possibly know either way.
The truth is, you couldn’t possibly know either way.
I think the argument is that we're not remotely close when considering the specific techniques used by the current generation of AI tools. Of course, people could make a new discovery any day and achieve AGI, but that's a different discussion.
That's true in a somewhat abstract way, but I just don't see any evidence of the claim that it is just around the corner. I don't see what currently existing technology can facilitate it. Faster-than-light travel could also theoretically be just around the corner, but it would surprise me if it was, because we just don't have the technology.
On the other hand, the people who push the claim that AGI is just around the corner usually have huge vested interests.
In some dimensions, current-day LLMs are already superintelligent. They are extremely good knowledge retrieval engines that can far outperform traditional search engines, once you learn how to use them properly. No, they are not AGIs, because they're not sentient or self-motivated, but I'm not sure those are desirable or useful dimensions of intellect to work towards anyway.
I think that's a very generous use of the word "superintelligent". They aren't anything like what I associate with that word, anyway.
I also don't really think they are knowledge retrieval engines. I use them extensively in my daily work, for example to write emails and generate ideas. But when it comes to facts they are flaky at best. It's more of a free association game than knowledge retrieval IMO.
We can change course if we can change course on capitalism
Cataclysms notwithstanding, human-level AI is inevitable. That doesn't have to mean that it'll be next week, or even next century, but it will happen.
The only way it won't is if humans are wiped out. (And even then there might be extra-terrestrials who get there where we didn't. Human-level doesn't have to mean invented by humans.)
AI will not threaten humans out of sadism or boredom, but because it takes jobs and leaves people unemployed.
When there is lower demand for human labor, then by the rule of supply and demand, the price of human labor (i.e., wages) goes down.
The real crisis is one of sinking wages, lack of social safety nets, and lack of future perspective for workers. That's what should actually be discussed.
But scary robots will take over the world! That's what all the movies are about! If it's in a movie, it has to be real.
Human level? That’s not setting the bar very high. Surely the aim would be to surpass human, or why bother?
Yeah. Cheap labor is so much better than this bullshit
The path to AGI seems inevitable - not because it’s around the corner, but because of the nature of technological progress itself. Unless one of two things stops us, we’ll get there eventually:
Either there’s something fundamentally unique about how the biological brain processes information - something that cannot, even in principle, be replicated in silicon,
Or we wipe ourselves out before we get the chance.
Barring those, the outcome is just a matter of time. This argument makes no claim about timelines, only trajectory. Even if we stopped AI research for a thousand years, it's hard to imagine a future where we wouldn't eventually resume it. That's what humans do: improve our technology.
The article points to cloning as a counterexample, but that's not a technological dead end; it's a moral boundary. If one thinks we'll hold that line forever, I'd call that naïve. When it comes to AGI, there's no moral firewall strong enough to hold back the drive toward it. Not permanently.
something that cannot, even in principle, be replicated in silicon
As if silicon were the only technology we have to build computers.
Did you genuinely not understand the point I was making, or are you just being pedantic? "Silicon" obviously refers to current computing substrates, not a literal constraint on all future hardware. If you’d prefer I rewrite it as "in non-biological substrates," I’m happy to oblige - but I have a feeling you already knew that.
And why is "non-biological" a limitation?
A lot of people are making baseless claims about it being inevitable... I mean, it could happen, but solving the hard problem of consciousness is not inevitable.
It's just a cash grab to take people's jobs and give them to a chatbot that's fed Wikipedia's data on crack.
Don't confuse AGI with LLMs. Both being AI systems is the only thing they have in common. They couldn't be further apart when it comes to cognitive capabilities.
Why would we want to? 99% of the issues people have with "AI" are just problems with society more broadly that AI didn't really cause, only exacerbated. I think it's absurd to just reject this entire field because of a bunch of shitty fads going on right now with LLMs and image generators.
Ummm, no? If moneyed interests want it, then it happens. We have absolutely no control over whether it happens. Did we stop Recall from being forced down our throats with Windows 11? Did we stop Gemini from being forced down our throats?
If capital wants it capital gets it. :(
😳 unless we destroy capitalism? 👉🏾👈🏾
The only problem with destroying capitalism is deciding who gets all the nukes.
Capitalism is just an economic system; I'm not sure what nukes have to do with it. It's not like billionaires directly own them, and we'd have to distribute the "nuke wealth" to the people or anything lol
Use Linux and don’t have any of those issues.
Get off the capitalist owned platforms.
In the US, sure, but there have been class revolts in other nations. I'm not saying they led to good outcomes, but King Louis XVI was rich, and being rich did not save him. There was a capitalist class in China during the Cultural Revolution; they didn't make it through. If it means we won't go extinct, why can't we have a revolution to prevent extinction?
Honestly I welcome our AI overlords. They can't possibly fuck things up harder than we have.
Can't they?