AI Error May Have Contributed to Girls' School Bombing in Iran
(thisweekinworcester.com)
The problem: a school full of kids gets bombed into nonexistence. This happens as a result of a program working as intended. A neural network designed by a militaristic surveillance regime to harvest information and spit out "likely" targets for subjugation, one that by design cannot know any part of reality outside of the information it's given, sits pretty fucking far away from the keyboard.
These AI systems are controlled. They are managed. They are being used as a shield and a scapegoat by the people who developed them and sold them to the militaries of the world as a perfect target acquisition solution, when they have proven time and time again that they can't even tell the difference between people, or do simple tasks without making shit up.
I've seen this same attitude of "Well, there's no way that the way I use an AI is bad in any way; it must be a you problem" when talking about an LLM making someone go off the rails or kill themselves, and I'm tired of hearing it used as an argument.
I think you're overshooting with your response.
My statement had nothing to do with LLMs or with targeting Iranian schools. I was simply responding to the earlier claim that a computer will fail you.
Computers don't fail people. They're precision instruments, and anything short of a cosmic-ray bit flip or a hardware failure will not result in a failed execution of their instructions. Therefore the computer didn't fail YOU; YOU failed to provide adequate instructions for it to do what you wanted.
Even in your reply you specified that these neural nets are being given specific information to train target selection on. Sure, the AI is nowhere near the keyboard (well, it actually is, but more on that later), but then again, WHO fed the AI the training data? WHOSE decision resulted in the AI spitting out a school as a viable bombing target? Ultimately a human sits at the top of the chain, even if multiple automated AI systems collated, labelled, sorted, and managed the training dataset. The reason those highly advanced neural nets didn't work the way they were expected to (even accounting for the NN black-box effect) is, ultimately, human failure.
Mind you, by "failure" here I simply mean expected outcome versus what actually happened. The expected outcome was the AI providing 100% viable combat targets: active combatants, military bases, etc., not a school full of children. Because while the Cheeto in chief might be a raging narcissistic child-rapist bastard, I doubt most of the rest of the US armed forces agree with bombing children, so the original goal of the AI had to be to provide said viable targets. Therefore there had to be a human component that provided the data that skewed the targeting, and that human component was most definitely sitting in a chair in front of a keyboard...
Exactly. The invention of the "Corporation" under capitalism served as a means to negate economic responsibility; now they have invented AI to negate operative responsibility.
I feel like the people who created and sold these programs should be considered no different from people who create a biological or nuclear weapon of mass destruction. It's working as intended, and the people who created, enabled, and used it should be held accountable.
But the destruction is not the fault of the technology; it's the fault of the people who used it to create a weapon of mass destruction while fighting global AI treaties and regulations in favor of greed and power.
Nuclear material has the potential to create something that can destroy the world, but it also has the potential to create something that could save humanity depending on how it's used by the people who possess the material.
Biolabs have used pathogens with pandemic potential to make weapons that destroy, but they also created the first vaccines against those pathogens and eventually developed methods to create non-live vaccines.
Proper use of AI would require transparency and regulations that place the good of all humanity before the good of the individual nation or corporation. It would be difficult to achieve, but not impossible. Destructive use seems less inherent to the technology itself than to human traits like greed and selfishness being permitted by society.
LLMs that are designed to manipulate people into continuing to use a product sit somewhere between cigarettes/gambling and a gun. I think they're definitely harmful and require regulations and restrictions. At a bare minimum, there should be some kind of mandatory warning label, or a link to reach out for help, always included at the bottom of the screen, just to remind people of the reality of what they're using when they use it.
I honestly kind of hate them, but I also don't think we need to try to banish them from society, even if they don't really have the same potential for improving humanity. At best they serve as time-savers, the same way using a calculator for simple math saves us time but also makes us a little dumber and less skilled in the long run.