submitted 1 month ago by ad_on_is@lemm.ee to c/technology@lemmy.world

Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to 'Settings’ > 'Apps’, then delete the application.”

[-] Armand1@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

For people who have not read the article:

Forbes states that there is no indication that this app can or will "phone home".

Its stated use is to let other apps scan images they already have access to and find out what kind of content they contain (known as "classification"). For example, to detect whether the picture you've been sent is a dick pic so the app can blur it.

My understanding is that, if this is implemented correctly (a big 'if'), this can be completely safe.

Apps requesting classification could be limited to classifying only files they already have access to. Remember that Android nowadays has a concept of "scoped storage" that lets you restrict folder access. If that's the case, it's no less safe than not having SafetyCore at all. It just saves you space, as companies like Signal, WhatsApp, etc. no longer need to train and ship their own machine-learning models inside their apps: classification becomes a common library / API any app can use.

It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don't know enough to say.
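To make the "correct implementation" concrete, here's a toy sketch of the access-gated design described above. This is purely hypothetical and not the real SafetyCore API; the class and method names are invented for illustration. The key property is that the shared classifier refuses to touch any file the calling app cannot already read.

```python
# Hypothetical sketch, NOT the real SafetyCore API: a shared on-device
# classifier that only classifies files the caller can already access.

class PermissionDenied(Exception):
    pass

class SharedClassifier:
    """One shared model for all apps, instead of each app shipping its own."""

    def __init__(self, file_access):
        # file_access maps an app name to the set of paths it may read
        # (standing in for Android's scoped-storage grants).
        self.file_access = file_access

    def classify(self, app, path):
        # Gate: no classification without pre-existing read access.
        if path not in self.file_access.get(app, set()):
            raise PermissionDenied(f"{app} cannot read {path}")
        # A real system would run an ML model here; we fake a label.
        return "explicit" if "nsfw" in path else "benign"

access = {"signal": {"/sdcard/DCIM/received.jpg"}}
svc = SharedClassifier(access)
print(svc.classify("signal", "/sdcard/DCIM/received.jpg"))  # benign
try:
    svc.classify("other_app", "/sdcard/DCIM/received.jpg")
except PermissionDenied as e:
    print("blocked:", e)
```

Under that design, an app learns nothing about files it couldn't already open itself.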

Besides, you think that Google isn't already scanning for things like CSAM? It's been confirmed to be done on platforms like Google Photos well before SafetyCore was introduced, though I've not seen anything about it being done on devices yet (correct me if I'm wrong).

[-] lepinkainen@lemmy.world 0 points 1 month ago* (last edited 1 month ago)

This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous number of safeties to protect people's privacy, and still it got shouted down.

I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing

EDIT: from looking at the downvotes, it really seems that Google can do no wrong 😆 And Apple is always the bad guy in lemmy

[-] Ulrich@feddit.org 1 points 1 month ago

Google did end up doing exactly that, and what happened was, predictably, that people were falsely accused of child abuse and possessing CSAM.

[-] noxypaws@pawb.social 1 points 1 month ago

it had a ridiculous amount of safeties to protect people’s privacy

The hell it did, that shit was gonna snitch on its users to law enforcement.

[-] lepinkainen@lemmy.world 0 points 1 month ago

Nope.

A human checker would get a reduced-quality copy only after multiple CSAM matches. No police were to be called unless the human checker verified a positive match.

Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked
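The reporting flow described above (as publicly reported for Apple's 2021 proposal; numbers and names simplified here) can be sketched as two gates, both of which must pass before anything leaves the device's account:

```python
# Simplified sketch of the reported review pipeline: a report happens
# only if (1) a threshold of matches is crossed AND (2) a human
# reviewer confirms the matches. The threshold value is the one Apple
# publicly announced; everything else is illustrative.

THRESHOLD = 30  # announced minimum number of matches per account

def should_report(match_count, human_confirmed):
    if match_count < THRESHOLD:
        return False        # below threshold: no human ever looks
    return human_confirmed  # human must verify before any report

print(should_report(5, human_confirmed=True))    # under threshold
print(should_report(40, human_confirmed=False))  # reviewer rejects cat pics
print(should_report(40, human_confirmed=True))
```

This is why flooding an account with fake matches of cat pictures would stall at the second gate: a reviewer looking at the reduced-quality copies would see cats and decline to report.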

[-] noxypaws@pawb.social 1 points 1 month ago

That's a fucking wiretap, yo

[-] Natanael@infosec.pub 1 points 1 month ago* (last edited 1 month ago)

Apple had it report suspected matches, rather than warning locally

It got canceled because the fuzzy-hashing algorithms turned out to be so insecure it was unfixable (it's easy to plant false positives).
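To see why fuzzy hashing admits planted collisions, here's a toy demonstration using a deliberately simplistic "average hash" (threshold each pixel against the image mean). This is not Apple's NeuralHash, but it shows the core weakness: two very different images can share a hash, so an attacker can craft an innocuous-looking image that matches a flagged one.

```python
# Toy perceptual hash: one bit per pixel, set if the pixel is brighter
# than the image's mean. Only the above/below-mean PATTERN matters, so
# images with wildly different pixel values can collide.

def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Two 2x2 grayscale "images" with very different pixel values...
target  = [250, 250, 5, 5]      # stand-in for a flagged image
planted = [130, 130, 120, 120]  # visually unrelated, low-contrast image

# ...but identical hashes, because the bright/dark pattern is the same.
print(average_hash(target))                            # (1, 1, 0, 0)
print(average_hash(planted))                           # (1, 1, 0, 0)
print(average_hash(target) == average_hash(planted))   # True
```

Real perceptual hashes are far more sophisticated, but the same class of attack (constructing second preimages) was demonstrated against NeuralHash shortly after the format was extracted.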

this post was submitted on 27 Feb 2025