[-] mrbeano@lemm.ee 1 points 6 months ago

Just like a "corporate veil," I'm sure this will get spun into a way to avoid responsibility. I mean, it shouldn't, but that's precisely why it will.

"Your honor, Alphabet Inc. deeply regrets the misidentification & destruction of a civilian area, but the AI made an honest mistake. It's unreasonable to expect manual review of EVERY neighborhood we target."

[-] lvxferre@mander.xyz 2 points 6 months ago

This is likely true for what we already see. Fuck, people even use dumb word filters to avoid responsibility! (Cough Reddit mods "I didn't do it, AutoMod did it" cough cough)

That said, this specific problem could be solved by AGI or another truly intelligent system. My concern is more about an AGI knowingly bombing a civilian neighbourhood because it claims it's better for everyone else, due to its lack of morality. That would be way, waaaaay worse than false positives like the one in your example.

[-] mrbeano@lemm.ee 2 points 6 months ago

Ugh, yes, the machines "know what's best".

I was just assuming it would be used for blame management, regardless of whether it was an accident or not.

"See, it wasn't me! It was ScapegoatGPT!"
