As a Data Scientist I can say: the present danger from AI isn't the singularity. That's science fiction. It's the lack of comprehension of what an AI actually is & the push to involve it in more and more decision-making processes.
Current AIs are, at their core, just statistical models that assign probabilities to answers based on previously observed data.
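To make that concrete, here's a minimal sketch of what "assigning probabilities based on previously observed data" means in practice. Toy data, scikit-learn, every number invented for illustration:

```python
# A "modern AI" at its core: a statistical model fit on past
# observations, spitting out probabilities for new cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [years_at_company, missed_deadlines]
X_past = np.array([[1, 5], [2, 4], [8, 1], [10, 0], [3, 3], [7, 2]])
y_past = np.array([1, 1, 0, 0, 1, 0])  # 1 = was fired, 0 = was kept

model = LogisticRegression().fit(X_past, y_past)

# For a new employee, the model just reports how similar they look
# to the people who were fired before.
print(model.predict_proba([[2, 3]])[0][1])  # P(fired) for this profile
```

That's it. No understanding, no reasoning, just pattern-matching against history.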
Governments and corporations around the globe are trying to use these models to automate decisions. One massive problem here is the lack of transparency and the human bias baked into the data.
For example, say a corporation uses an AI to determine who should be fired. You get fired, you try to complain, but all you get is the answer that the machine had a wide variety of input data & you should have worked harder.
We've seen in the past that AIs focus on things we don't necessarily want them to focus on. In the example above, maybe your job performance was better than your colleague Dave's, but you are a PoC and Dave is white. In the past, PoCs were more likely to be fired, so the AI decided that you were the most probable answer to the question 'Who should we fire?'.
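You can reproduce this failure mode with a few lines of synthetic data. Everything below is invented, but the mechanism is real: feed a model biased historical labels and it learns the bias as a "predictor":

```python
# Toy demonstration: if the historical firing data is racially biased,
# the model happily encodes race as a feature that "predicts" firing.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
performance = rng.normal(0, 1, n)   # higher = better employee
is_poc = rng.integers(0, 2, n)      # 1 = PoC, 0 = not

# Biased history: at the SAME performance, PoC employees
# were fired more often.
p_fired = 1 / (1 + np.exp(performance - 1.5 * is_poc))
fired = rng.random(n) < p_fired

model = LogisticRegression().fit(np.column_stack([performance, is_poc]), fired)
print(model.coef_)  # large positive weight on is_poc: the bias is now "learned"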
If a human had made the decision, you could interview them and uncover the underlying racism. Deciphering the decision of an AI is next to impossible.
So we slowly take away our ability to address wrongs in our bureaucratic processes by cementing them into statistical models, & thereby remove our ability to improve our societal values. AI has the potential to grind society's progress to a halt & drag easily fixable problems decades or centuries into the future.
So you eliminate race as a possible input. Now it finds proxies. A non-standard name? Your address? Which holidays you take off? Maternity-leave gaps can signal parenthood. Patterns of time off/FMLA can align with medical treatments. It's hard to choose relevant inputs without choosing revealing inputs.
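Here's the proxy problem in the same kind of toy setup: remove the race column entirely, and the model reconstructs the bias from a correlated feature like a segregated zip code (again, all numbers invented):

```python
# Toy sketch of proxy leakage: race is dropped as an input, but a
# correlated feature (a segregated zip code) stands in for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
is_poc = rng.integers(0, 2, n)
# Residential segregation: zip code matches race 90% of the time.
zip_code_a = np.where(rng.random(n) < 0.9, is_poc, 1 - is_poc)
performance = rng.normal(0, 1, n)

# Same biased firing history as before; the race column is NOT
# given to the model this time.
p_fired = 1 / (1 + np.exp(performance - 1.5 * is_poc))
fired = rng.random(n) < p_fired

model = LogisticRegression().fit(np.column_stack([performance, zip_code_a]), fired)
print(model.coef_)  # the zip-code feature soaks up most of the racial bias
```

Scrubbing the obvious column doesn't scrub the signal.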
The real-real issue is how many people don't bloody understand what AI/ML are, but are making huge decisions about where they are appropriate to use.
I can't count how many times "Let's add AI to this page!" has been requested by non-tech execs in the last year who have no idea what does or doesn't work. Our most successful analytical report runs on a simple 10-rule heuristic and nobody is the wiser.
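For what it's worth, the kind of "heuristic" I mean is literally just something like this; the rules and thresholds below are made-up placeholders, not our actual ones:

```python
# What gets sold as "AI" is sometimes just a handful of plain rules.
# All thresholds here are invented placeholders for illustration.
def flag_account(account: dict) -> bool:
    rules = [
        account["days_inactive"] > 90,
        account["failed_logins"] > 5,
        account["open_tickets"] > 3,
        account["spend_drop_pct"] > 50,
        account["contract_days_left"] < 30,
    ]
    # Flag the account if any rule fires: transparent and auditable,
    # unlike a statistical model you can't interrogate.
    return any(rules)

print(flag_account({"days_inactive": 120, "failed_logins": 0,
                    "open_tickets": 1, "spend_drop_pct": 10,
                    "contract_days_left": 200}))  # True
```

Every decision it makes can be explained in one sentence, which is exactly what the statistical models can't do.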
So yeah, people trying to inject AI into hiring/firing. The people who did inject AI into predicting criminality. It all boils down to negligence, ignorance of your tools.
Exactly. The "black box" nature of these systems should be a red flag for any practical usage.