[-] FatVegan@leminal.space 1 points 23 minutes ago

There are no mistakes, just happy accidents

[-] Ildsaye@hexbear.net 2 points 27 minutes ago* (last edited 27 minutes ago)

It's not complicated; if you 'delegate' a crime to the yes machine, you remain the criminal.

[-] FukOui@lemmy.zip 19 points 6 hours ago* (last edited 6 hours ago)

Nope, this was 100% human intent.

They're making AI an excuse for their cockups. I believe that's why the Pentagon is partnering with OpenAI/Anthropic: to be their scapegoats.

It's crazy how they think that people are stupid, sanewashing us to our faces.

[-] Keeponstalin@lemmy.world 6 points 1 hour ago

The entire point of military use of AI is 'plausible deniability', so they can deflect responsibility as they increase their civilian targeting. Israel mechanized this strategy in Gaza with its Lavender and Where's Daddy AI systems.

[-] Thordros@hexbear.net 7 points 6 hours ago

You mean to say that the machine that always agrees with you and is confidently wrong all the time did what it was built to do?

[-] NihilsineNefas@slrpnk.net 36 points 9 hours ago* (last edited 9 hours ago)

And my favourite edit:

Just replace management with military and you've got what ishell and the merkins, with the support of the epstein class, are doing.

[-] AnUnusualRelic@lemmy.world 7 points 6 hours ago

Which of course was turned into:

"A computer can never be held accountable, that's why it's perfect for management decisions"

[-] xxce2AAb@feddit.dk 9 points 7 hours ago* (last edited 7 hours ago)

Quite possibly one of the best things that ever came out of IBM, and one of my favorite products of 1979 (along with "Alien"). Of course, Weizenbaum had a thing or two to say about the ELIZA effect back in 1966. As usual, nobody listened.

[-] bennieandthez@lemmygrad.ml 6 points 7 hours ago

How convenient! Make no mistake, it was completely intended.

[-] it_depends_man@lemmy.world 21 points 9 hours ago

Wow that's crazy. Software problem. Nothing you can do about that I guess. It is what it is. Could happen to anyone, really. Those who have never murdered a school full of children can throw the first stone and such.

[-] NihilsineNefas@slrpnk.net 2 points 9 hours ago

Guess Iran really does get to throw the first stone.

[-] stoy@lemmy.zip 17 points 9 hours ago

I remember the countless excuses for similar attacks on schools and hospitals.

The fault is either organizational where no single person can be blamed, or it is the action of a low ranking individual who takes the fall.

Often it is the fault of the enemy, who "used a children's hospital as a meeting point", so it is clearly their fault!

Just read up on the 2015 Kunduz hospital airstrike for one example; the Lions Led By Donkeys podcast has a good episode on it.

Of course it fucking did. This is what happens when you rely on AI to automate your war crimes.

The Lavender precedent: automated kill lists and the limits of International Humanitarian Law

In late 2024, the UN verified that nearly 70% of those killed were women and children (Farge 2024). AOAV's data on civilian harm following explosive weapon use in Gaza puts that percentage even higher. Evidence recorded in a classified Israeli military database in May 2025 revealed that only 17% of the 53,000 Palestinians killed in Gaza were combatants, implying that 83% were civilians (Graham-Harrison & Abraham 2025).
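For what it's worth, the arithmetic behind that last claim checks out. A minimal sketch (the 53,000 total and 17% combatant share are the figures cited above; the derived counts are simple rounding, not independently sourced):

```python
# Sanity-check the cited figures: 53,000 killed, 17% identified as combatants.
total_killed = 53_000
combatant_share = 0.17

combatants = round(total_killed * combatant_share)
civilians = total_killed - combatants
civilian_share = 1 - combatant_share

print(f"Combatants: {combatants:,} ({combatant_share:.0%})")
print(f"Civilians:  {civilians:,} ({civilian_share:.0%})")
```

Which is where the "83% were civilians" figure comes from: it is just the complement of the 17% combatant share, roughly 44,000 people.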

[-] takeda@lemmy.dbzer0.com 6 points 9 hours ago

Well, those chatbots failed to run a vending machine, but they want to use them for autonomous weapons.

IMO they are setting things up so if a tactical nuke somehow makes it to the battlefield nobody will be responsible.

[-] Tyrq@lemmy.dbzer0.com 9 points 9 hours ago

The only thing this tech is good for is plausible deniability

[-] Bazell@lemmy.zip 5 points 9 hours ago

LOL. I am pretty sure that this wasn't an AI mistake. And wasn't a mistake at all. It was intentional.

this post was submitted on 07 Mar 2026
74 points (96.2% liked)

Hacker News
