[-] bleistift2@sopuli.xyz 87 points 2 months ago

10% false positives is an enormous rate given how Police like to ‘find’ evidence and ‘elicit’ confessions.

[-] spankmonkey@lemmy.world 33 points 2 months ago

It isn't predicting individual crimes, just pattern recognition and extrapolation like how the weather is predicted.

"There are on average 4 shootings in November in this general area so there probably will be 4 again this year." is the kind of prediction that AI is making.

[-] Death_Equity@lemmy.world 14 points 2 months ago

Predictions with hallucinations is exactly how an effective justice system works.

[-] echodot@feddit.uk 1 points 2 months ago

So they are using it to try and decide on deployment?

If that is all they're using it for, I guess it isn't too bad. As long as it isn't accusing individuals of planning to commit a crime with zero evidence.

[-] spankmonkey@lemmy.world 3 points 2 months ago* (last edited 2 months ago)

It is probably going to be used to justify the disproportionate police attention paid to minority communities and to justify activities similar to stop and frisk.

[-] rottingleaf@lemmy.world 1 points 2 months ago

No, it's bad, because ultimately it's not leading anywhere. Such tools can't be used by unqualified people who don't understand how they work (not many qualified people do either; my team lead at work, for example, is enthusiastic and just doesn't seem to hear arguments against it, at least not those I can make with my ADHD, that is, while avoiding detailed explanations).

If ultimately it's not applicable where people want to apply it, it shouldn't even be tested.

This is giving such applications credibility.

It's the slippery slope that some people think doesn't exist. Actually, they're everywhere.

[-] ininewcrow@lemmy.ca 13 points 2 months ago

lol .... if AI is wrong .... we'll make it right

[-] lugal@sopuli.xyz 4 points 2 months ago

That's in a punitive system. Used in a transformative/preventive manner (which it won't be), this could actually save lives and help people in need.

[-] TehBamski@lemmy.world 28 points 2 months ago

Yeah, yeah. This is nothing. This is just a Minor(ity) Report.

[-] disguy_ovahea@lemmy.world 6 points 2 months ago

Yup. About as exciting as triplets in a hot tub.

[-] Brickhead92@lemmy.world 5 points 2 months ago

But what if it was some kind of hot tub time machine... *turns to look at camera*

[-] sundrei@lemmy.sdf.org 3 points 2 months ago

The butcher, the baker, the candlestick-maker... of course, they're triplets, it all fits!

[-] Pilferjinx@lemmy.world 20 points 2 months ago

Israel and China implement sophisticated algorithms to suppress Palestinians and Uyghurs with severe effectiveness. Don't take this tech lightly.

[-] ProgrammingSocks@pawb.social 22 points 2 months ago

If it were my choice I'd have it banned. "90%" accuracy? So 10 out of 100 predictions result in an innocent person getting surveilled for literally no reason? Absolutely the fuck not.
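And "90% accuracy" is even worse than it sounds once base rates are factored in: if actual offenders are rare, a 90%-accurate classifier flags far more innocent people than guilty ones. A back-of-the-envelope sketch with assumed numbers:

```python
population = 100_000
offenders = 100           # assumption: actual offenders are rare (0.1%)
accuracy = 0.90           # treated as both sensitivity and specificity

true_positives = offenders * accuracy                        # offenders correctly flagged
false_positives = (population - offenders) * (1 - accuracy)  # innocents wrongly flagged

flagged = true_positives + false_positives
innocent_share = false_positives / flagged
print(f"{innocent_share:.0%} of flagged people are innocent")  # 99% of flagged people are innocent
```

This is the classic base-rate fallacy: the headline accuracy number says almost nothing about how many of the people it points at actually did anything.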

[-] aeshna_cyanea@lemm.ee 6 points 2 months ago* (last edited 2 months ago)

Idk about China, but Israel carpet-bombs apartment buildings. You don't need precision AI for that.

[-] Intergalactic@lemmy.world 13 points 2 months ago

Generative AI ≠ actual AI.

[-] dragonfucker@lemmy.nz 2 points 2 months ago

Yeah! Real AI is expert systems and fuzzy logic! Generative AI's capabilities and intelligence pale in comparison to Fuji Electric's advanced temperature control systems!

[-] Leate_Wonceslace@lemmy.dbzer0.com -1 points 2 months ago
[-] A_Union_of_Kobolds@lemmy.world 1 points 2 months ago

Because language models aren't sentient?

[-] echodot@feddit.uk 3 points 2 months ago

Is that a requirement of AI?

[-] dragonfucker@lemmy.nz 1 points 2 months ago

Nobody but science fiction writers has ever said AI is sentient. You watch too many movies.

[-] andrew_bidlaw@sh.itjust.works 10 points 2 months ago

Do they have skull measurements in their dataset? It's predestined to reproduce and cement existing biases.

[-] embed_me@programming.dev 3 points 2 months ago

Hey now, phrenology is a well documented scientific field

[-] psycho_driver@lemmy.world 10 points 2 months ago

I'm pretty sure the current techBrocracy will implement it like this:

if (isBlack) { willCrime = true; }

[-] kryptonidas@lemmings.world 7 points 2 months ago

I have seen it before, and I liked it. “Person of Interest” on CBS. Though in real life, I’m not a fan.

[-] Lemjukes@lemm.ee 3 points 2 months ago

You are being watched.

[-] TheBat@lemmy.world 2 points 2 months ago

There are too many Samaritans already but no one has made The Machine.

[-] frustrated_phagocytosis@fedia.io 4 points 2 months ago

I mean I can also create situations where I can 100% predict crime, like an old school protection racket.

[-] kubica@fedia.io 3 points 2 months ago

Hey could you just scan me before I continue putting effort in earning money?

[-] Zorque@lemmy.world 1 points 2 months ago

I thought the hand thing was (mostly) a thing of the past.

[-] Shardikprime@lemmy.world -1 points 2 months ago

I mean, you can train AI to look for really early signs of multiple diseases. It can predict the future, sort of.

this post was submitted on 03 Dec 2024
523 points (98.5% liked)

Microblog Memes
