Your scenario 1 is the actual danger. It's not that AI will outsmart us and kill us. It's that AI will trick us into trusting it with more responsibility than it can safely handle, with disastrous results.
It could be small-scale, low-stakes stuff, like an AI designing a menu that humans blindly cook from. Or it could be higher-stakes stuff, like affecting election results, crashing financial markets, or causing a military to target the wrong house. The danger has always been that humans will act on the information provided by a malfunctioning AI, not that AI and technology will form a closed loop with no humans involved.
Yup, it is a real risk. But on the lighter side, it's a risk that we [humanity] have been fighting since forever: the possibility of some of us causing harm to others not out of malice, but out of overconfident assumptions and similar character flaws. (In this case: "I assume that the AI is reliable enough for this task.")