Here's the thing. The Terminator movies were a warning against government/military AI. Actually, slightly before that, WarGames was too. But honestly, I'm not worried about military AI taking over.
I think if the military set up an AI, they would have multiple ways to kill it off in seconds. Sure, they'd be in a more dangerous position if an AI went wild, but precisely because of how they operate (not because of the movies), they would have a lot of systems in place to mitigate disaster. Is it possible for things to go wrong? Yes. Likely? No.
I'm far more worried about the geeky kid who now has access to open-source AI that can be retasked. Someone who doesn't fully understand the consequences of their actions, or at least can't properly quantify the risks they're taking, but who is smart enough to use these tools to their own ends.
Some of you might still be teenagers, but those of you who aren't, think back. Wouldn't you have thought it'd be cool to set up an AutoGPT-style agent, or some form of adversarial AI, with open-ended success criteria? Criteria that are either implicitly dangerous and/or illegal, or broad enough that the easiest path the AI finds to its goal involves doing dangerous and/or illegal things. You know, for fun. Just to see if it would work.
I'm not convinced AI is quite there yet, dangerous-wise. Or maybe it is; I honestly haven't kept close tabs on this. But when it does reach that level of maturity, a lot of the tools will still be open source. They can be modified, and any protections can be removed "for the lols" or "just to see what it can do," and someone without the level of control a government/military entity has could easily lose control of their own AI. That's what scares me, not a Joshua or a Skynet.
The biggest risk of AI at the moment is the same one posed by the Industrial Revolution: many professions will become obsolete, and it might be used as leverage to impose worse living conditions on those who still have jobs.
That's a real concern. In the long run it will likely backfire, though: AI needs human input to work, and if it starts getting other AIs' output fed in as its input, things will go bad in fairly short order. That raises another point, too: big business is another likely source of runaway AI. I trust business use of AI less than anyone else's.
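A toy sketch of why the feedback loop degrades (my own illustration, not anything from a real model pipeline): let a deliberately trivial "model" (a fitted Gaussian) stand in for an LLM, and train each generation only on the previous generation's samples, never the original data. Estimation error compounds and the fitted distribution drifts, a miniature version of the model-collapse problem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" human data -- samples from a standard normal.
data = rng.normal(loc=0.0, scale=1.0, size=1000)

for gen in range(1, 11):
    # "Train" a model on the current data: just fit a mean and std.
    mu, sigma = data.mean(), data.std()
    print(f"gen {gen}: fitted mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation sees only this model's output,
    # not the original data, so fitting error compounds.
    data = rng.normal(loc=mu, scale=sigma, size=1000)
```

Run it and the fitted parameters wander away from the true ones generation by generation, purely from compounding sampling error, with no adversary needed.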
There's also a critical mass of unemployment beyond which revolution is inevitable. We'd likely see UBI and an assured standard of living before we got close to that, with the option to make extra money from your passion. I don't doubt that corporations will happily dump their employees for AI at a moment's notice once it's proven out; big business is extremely predictable in that sense, with zero forward planning beyond the current quarter. But I have some optimism that common sense would prevail from some source, and they wouldn't just leave 50%+ of the population to die slowly.
The army loves a chain of command, and I don't see that changing with AI. The military simply putting an AI in the commander's seat and letting it roll doesn't sound credible to me.