AI Error May Have Contributed to Girl's School Bombing in Iran
(thisweekinworcester.com)
Counterargument: 99% of computer "misbehaviour" not caused by hardware issues is actually PEBKAC.
Your failure to use a computer ≠ the computer failing you.
The problem: a school full of kids gets bombed into nonexistence. This happened as a result of a program working as intended. A neural network designed by a militaristic surveillance regime to harvest information and spit out "likely" targets for subjugation, which by design cannot know any part of reality outside of the information it's given, sits pretty fucking far away from the keyboard.
These AI systems are controlled. They are managed. They are being used as a shield and a scapegoat by the people who developed them and sold them to the militaries of the world as a perfect target-acquisition solution, when they have proven time and time again that they can't even tell people apart or do simple tasks without making shit up.
I've seen this same attitude of "Well, there's no way that the way I use an AI is bad in any way, must be a you problem" when talking about an LLM making someone go off the rails or kill themselves, and I'm tired of hearing it used as an argument.
Exactly. The invention of the "Corporation" under capitalism served as a means to negate economic responsibility; now they have invented AI to negate operative responsibility.
I feel like the people who created and sold these programs should be considered no different from people who create a biological or nuclear weapon of mass destruction. It's working as intended, and the people who created, enabled, and used it should be held accountable.
But destruction is not the fault of the technology, it's the fault of the people who used it to create a weapon of mass destruction while fighting global AI treaties and regulations in favor of greed and power.
Nuclear material has the potential to create something that can destroy the world, but it also has the potential to create something that could save humanity depending on how it's used by the people who possess the material.
Biolabs have used pathogens with pandemic potential to make weapons that destroy, but they also created the first vaccines against those pathogens, and eventually developed methods to create non-live vaccines.
Proper use of AI would require transparency and regulations that place the good of all humanity before the good of the individual nation or corporation. That would be difficult to achieve, but not impossible. Destructive use seems less inherent to the technology itself than to human traits like greed and selfishness being permitted by society.
The LLMs that are designed to manipulate people into continuing to use a product sit somewhere between cigarettes/gambling and a gun. I think they're definitely harmful and require regulations and restrictions. At the bare minimum, there should be some kind of mandatory warning label, or a link to reach out for help always included at the bottom of the screen, just to remind people of the reality of what they're using.
I honestly kind of hate them, but I also don't think we need to try to banish them from society, even if they don't really have the same potential for improving humanity. At best they serve as time savers, the same way using a calculator to do simple math saves us time but also makes us a little dumber/less skilled in the long run.