I think you're talking about accelerationism. IMO, the main problem with unrestrained AI growth is that if AI turns out to be as good as the hype says it is, then we'll all be dead before the revolution occurs.
The trick is to judge things on their own merit and not on the hype around them.
In that case, you should know that Geoff Hinton (the guy whose lab kicked off the whole AI revolution last decade) quit Google in order to warn about the existential risk of AI. He believes there's at least a 10% chance that it will kill us all within 30 years. Ilya Sutskever, his former student and a co-founder of OpenAI, believes similarly, which is why he left OpenAI and founded Safe Superintelligence (yes, that bare-bones HTML page really is their homepage) to work on the alignment problem.
You can also find popular rationalist AI pundits like gwern, acx, and yudkowsky voicing similar concerns, with P(doom) estimates ranging from low to laughably high.
Yes, I know: the robot apocalypse people seem so desperate to be afraid of is always just around the corner. Geoff Hinton, while a definite pioneer in AI, didn't kick anything off; he was one of a large number of people working on the field, and one of a small number predicting armageddon.
The reason it's always just around the corner is that there is very strong evidence we're approaching the singularity. Why do you sound so sarcastic about it? What probability would you assign to an AI apocalypse in the next three decades?
Geoff Hinton absolutely kicked things off. Everybody else had given up on neural nets for image recognition, but his lab's 2012 ImageNet breakthrough renewed interest throughout the world. We wouldn't have deepdreaming slugdogs without him.
It should not be surprising that most people in the field of AI are not predicting armageddon, since it would be harmful to their careers to do so. Hinton isn't predicting the apocalypse either -- he's giving it a 10-20% chance, which, if anything, is a prediction that it won't happen.
I'm sarcastic because I'd assign it about the same probability as a zombie apocalypse. At the nuts-and-bolts level, I think both scenarios are technically flawed in the same Hollywood-fantasy way.
What does an AI apocalypse even look like to you? Computers launching nuclear missiles or what? Shutting down power grids?