[-] rockSlayer@lemmy.world 53 points 1 year ago* (last edited 1 year ago)

AI shouldn't be anywhere near law enforcement. Including automated patrol software.

[-] EatYouWell@lemmy.world 14 points 1 year ago

It's not AI, though. They're just using buzzwords, because what they described is functionally no different from AFIS. It's just a poorly written algorithm.

[-] rockSlayer@lemmy.world 8 points 1 year ago

I'm aware, but unfortunately I'm not big enough in the tech industry to coin differentiating terms. AI is an extremely broad term, ranging from literal if-else statements to LLMs and generative AI. Unfortunately, the specifics usually get buried under the term.
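As a toy illustration of how broad that range is: a handful of hard-coded if-else rules can be (and historically has been) marketed as "AI". The function name, thresholds, and scenario below are entirely made up.

```python
# A hypothetical rule-based "expert system": nothing but hard-coded rules,
# yet this style of program has long been sold under the "AI" label.
def rule_based_triage(temperature_c: float, has_cough: bool) -> str:
    """Return an action based on fixed, human-written rules."""
    if temperature_c >= 38.0 and has_cough:
        return "flag for review"
    elif temperature_c >= 38.0:
        return "monitor"
    else:
        return "no action"

print(rule_based_triage(38.5, True))  # flag for review
```

There is no learning anywhere in that code, which is exactly the point: "AI" tells you almost nothing about what a system actually does.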

[-] MindSkipperBro12@lemmy.world 3 points 1 year ago

Don’t be scared of the inevitable

[-] Voli@lemmy.ml 46 points 1 year ago

I wish the term AI would be retired, because these devices are far from the idea of what AI is.

[-] psivchaz@reddthat.com 22 points 1 year ago

I always thought machine learning was descriptive and made sense. I guess it just didn't get investors erect enough.

[-] Floey@lemm.ee 6 points 1 year ago

AI has been used to refer to all kinds of dynamic programming in the history of computation. Algebraic solvers, edge detection, fuzzy decision systems, player programs for video games and tabletop games. So when you say AI is this or that you are being rather prescriptivist about it.

The problem with AI and ML is more one of it being presented to the public by grifters as a magical one-stop solution to almost any problem. Which term was used hardly matters; it was the propaganda that carried the term. It would be like saying the name Nike is the reason for the shoe brand's success and not its marketing.

So discredit the grifters, and if you want to destroy the term, look to dilute it by using it to describe even more things. It was never really a useful term to begin with. I'll leave you with this quote:

"A lot of cutting-edge AI has filtered into general applications, often without being called AI, because once something becomes useful enough and common enough it's not labelled AI anymore."

[-] NightOwl@lemmy.one 6 points 1 year ago

Yeah, things that weren't called AI years back are just getting called AI now.

[-] Gurfaild@feddit.de 2 points 1 year ago

"AI" was always an imprecise term - even compilers used to be called AI once

[-] Phanatik@kbin.social 5 points 1 year ago

It's almost like the incessant marketing of standard optimisation algorithms as artificial intelligence has diluted the tech industry with meaningless buzzwords.

[-] catsup@lemmy.one 41 points 1 year ago* (last edited 1 year ago)

TLDR:

In 2018, a man in a baseball cap stole thousands of dollars worth of watches from a store in central Detroit.

The AI was trained on a database of mostly white people. The photos of people of colour in the dataset were generally of worse quality, as default camera settings are often not optimised to capture darker skin tones.

Mr Williams' photo didn't come up first. In fact, it was the ninth-most-probable match.

Regardless...

Officers drove to Mr Williams' house and handcuffed him.

They arrested him in front of his five- and two-year-old kids...


AI with bad training data + lazy cops who didn't learn how to use the tools they were given = this mess
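The "ninth-most-probable match" failure described above can be sketched as a one-to-many embedding search: the system ranks every identity in the gallery by similarity to the probe image and hands investigators a candidate list. Everything below (names, vector dimensions, random embeddings) is made up for illustration; it is not the actual system Detroit police used.

```python
import numpy as np

def rank_matches(probe: np.ndarray, gallery: dict) -> list:
    """Rank gallery identities by cosine similarity to the probe embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [(name, cos(probe, emb)) for name, emb in gallery.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical gallery of 20 identities with random 128-d embeddings,
# standing in for millions of driver-licence photos.
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(20)}
probe = rng.normal(size=128)  # stand-in for a blurry-footage embedding

ranked = rank_matches(probe, gallery)
# The system always returns *some* ranked list, even for a garbage probe.
# Treating entry number nine in that list as an identification is the
# human failure: the tool only ever offered weak, relative evidence.
```

The key property is that a one-to-many search cannot say "no one matches"; it can only say who matches least badly, which is why the output is investigative lead material, not evidence.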

[-] chicken@lemmy.dbzer0.com 10 points 1 year ago

Sounds like the same old law enforcement trend; technology deployed as an excuse generator.

[-] bear@slrpnk.net 32 points 1 year ago* (last edited 1 year ago)

The computer didn't get it wrong; the computer did exactly what it was programmed to do. Blaming the computer implies that this can be solved by fixing the computer, that it "just wasn't good enough yet", when it was the humans who were supposed to exercise their judgment who got it wrong. You can't fix that from the computer.

[-] AngryCommieKender@lemmy.world 1 points 1 year ago

PICNIC (problem in chair, not in computer), or PEBKAC (problem exists between keyboard and chair).

[-] uriel238@lemmy.blahaj.zone 15 points 1 year ago

Ever since we let law enforcement use facial recognition technology, they've been arresting people for false positives, sometimes for long periods of time.

It's not just camera problems and poor training on non-white faces; people genuinely look too much alike, especially when the tech is used on blurry, low-res security footage.

[-] echodot@feddit.uk 6 points 1 year ago* (last edited 1 year ago)

I used to work in security camera monitoring, and I never understood why insurers would touch some of these companies with an electrified cattle prod.

They would be pretty high-value-asset companies with valuable stuff on premises that could be stolen: construction equipment, medical equipment, guns, cars, steel, copper, lead, etc. Yet their security cameras would max out at 720p, have a giant spider web on them without fail, and would invariably be mounted on some wobbly pole that blew around in the wind, causing 300 false positives a minute. We literally used to switch those cameras off.

Why didn't they insist on equipment that cost more than $4.50 from Walmart?

The only cameras we worked with that were actually any good were the number plate recognition cameras, but they were specialist kit and absolutely useless for anything other than number plate recognition. But boy did they get you that number plate.

[-] spudwart@spudwart.com 5 points 1 year ago

The System is functioning as Intended.

[-] autotldr@lemmings.world 4 points 1 year ago

This is the best summary I could come up with:


Facial recognition could analyse a blown-up still taken from a security tape, sift through a database of millions of driver licence photos, and identify the person who did the crime.

Months later, the facial recognition system used by Detroit police combed through its database of millions of driver licences to identify the criminal in the grainy security tapes.

By January 2020, as Mr Williams had his mug shot taken in the Detroit detention centre, civil liberties groups knew that black people were being falsely accused due to this technology.

It would give law enforcement and security agencies quick access to up to 100 million facial images from databases around Australia, including driver licences and passport photos.

That didn't stop the then government from ploughing ahead with its planned national facial recognition system, says Edward Santow, an expert on responsible AI at the University of Technology Sydney, and the Australian Human Rights Commissioner at the time.

Despite this, last month Senate estimates heard the federal police tested a second commercial one-to-many face matching service, PimEyes, earlier this year.


The original article contains 1,870 words, the summary contains 162 words. Saved 91%. I'm a bot and I'm open source!

this post was submitted on 03 Nov 2023
211 points (100.0% liked)

Privacy Guides
