When the idea of self-driving cars first started becoming mainstream, I remember a lot of debate about liability. If an accident occurred, who would be at fault? I think many of those questions are still unanswered.
Fast forward and now we have software like ChatGPT. I assume these systems will only become more capable (and connected) over time.
Which makes it strange that I haven't really heard any similar discussion around liability. What happens when an AI makes mistakes or causes damage?
Maybe in people's minds it doesn't matter, because AI is either something that helps with homework questions or something that's going to take over humanity. Reality is probably somewhere in between, with far more mundane mistakes made and damage done.
What happens when the first ransomware is deployed by AI, on behalf of a user who just wanted tips on how to make more side income?
That isn't how it works today. I'm talking about sometime in the distant (or near) future. Surely at some point AI will have capabilities on par with at least a low-level hacker.
Or, if you still think that's a stretch, just imagine all the ways perfectly legitimate software can cost companies money, not through malicious design, but simply through mistakes.
That would be the very distant future. What we have today is machine learning. It is a powerful tool, but it is not in any way intelligent. It is not going to become self-aware and start a war with humanity. We wouldn't know how to create something like that even if we wanted to.
If a person uses that tool to generate code that causes damage, the consequences will be the same as if that person had written the code entirely by hand. But it is not going to miraculously "go wild" and create some ethical dilemma where we wouldn't know who is responsible.