submitted 1 month ago by ad_on_is@lemm.ee to c/technology@lemmy.world

Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to 'Settings’ > 'Apps’, then delete the application.”

[-] TK420@lemmy.world 4 points 1 month ago

Gimme Linux phone, I’m ready for it.

[-] ad_on_is@lemm.ee 1 points 1 month ago

If there were something that could run Android apps virtualized, I'd switch in a heartbeat.

[-] ilinamorato@lemmy.world 0 points 1 month ago* (last edited 1 month ago)

The Firefox Phone should've been a real contender. I just want a browser in my pocket that takes good pictures and plays podcasts.

[-] StefanT@lemmy.world 1 points 1 month ago

Unfortunately, Mozilla is going the enshittification route more and more. Good in this case, then, that the Firefox Phone never took off.

[-] teohhanhui@lemmy.world 2 points 1 month ago
[-] kattfisk@lemmy.dbzer0.com 1 points 1 month ago

To quote the most salient post:

The app doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.

Which is a sorely needed feature to tackle problems like SMS scams
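
Concretely, the flow described there would look something like this (a hedged sketch: SafetyCore's actual API surface isn't public, so every name below is invented purely to illustrate "content checked locally, user warned, nothing reported"):

```kotlin
// Hypothetical sketch only: these types (LocalClassifier, Verdict) are
// invented for illustration; SafetyCore's real interfaces are not public.
enum class Verdict { OK, SPAM, SCAM, MALWARE }

interface LocalClassifier {
    // Runs entirely on-device; the text never leaves the phone.
    fun classify(text: String): Verdict
}

// An SMS app could check each incoming message locally and warn the user,
// rather than reporting anything to Google or anyone else.
fun handleIncomingSms(
    body: String,
    classifier: LocalClassifier,
    warn: (String) -> Unit,
    deliver: (String) -> Unit
) {
    when (classifier.classify(body)) {
        Verdict.SPAM, Verdict.SCAM, Verdict.MALWARE -> warn(body)
        Verdict.OK -> deliver(body)
    }
}
```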

[-] ad_on_is@lemm.ee 0 points 1 month ago

And what exactly does that have to do with GrapheneOS?

[-] teohhanhui@lemmy.world 1 points 1 month ago

Please, read the links. They are the security and privacy experts when it comes to Android. That's their explanation of what this Android System SafetyCore actually is.

[-] SavageCoconut@lemmy.world 1 points 1 month ago

Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”

GrapheneOS — an Android security developer — provides some comfort, that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”

But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Which gets to transparency again.

[-] Ilovethebomb@lemm.ee 1 points 1 month ago

I've just given it the boot from my phone.

It doesn't appear to have been doing anything yet, but whatever.

[-] mp04610@lemm.ee 1 points 1 month ago
[-] hector@sh.itjust.works 1 points 1 month ago

Thanks for the link. This is impressive, because it really has all the traits of spyware; apparently it installs without asking for permission?

[-] Moose@moose.best 1 points 1 month ago

Yup, heard about it a week or two ago. Found it installed on my Samsung phone; it never asked for permission or gave any indication that it had been added to my phone.

[-] Armand1@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

For people who have not read the article:

Forbes states that there is no indication that this app can or will "phone home".

Its stated use is for other apps to scan an image they have access to, to find out what kind of thing it is (known as "classification"). For example, to find out if the picture you've been sent is a dick pic so the app can blur it.

My understanding is that, if this is implemented correctly (a big 'if'), this can be completely safe.

Apps requesting classification could be limited to only classifying files that they already have access to. Remember that Android has a concept of "scoped storage" nowadays that lets you restrict folder access. If that's the case, it's no less safe than not having SafetyCore at all. It just saves you space, since companies like Signal, WhatsApp, etc. no longer need to train and ship their own machine learning models inside their apps; it becomes a common library / API any app can use.
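
For the "scoped storage" point, here's a minimal sketch of how scoped access already works today (these are real framework APIs; only the closing comment about classification is hypothetical):

```kotlin
// Minimal sketch of Android's scoped access via the system folder picker.
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.os.Bundle

class PickFolderActivity : Activity() {
    private val requestTree = 1

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Ask the user to grant access to one folder; the app sees nothing else.
        startActivityForResult(Intent(Intent.ACTION_OPEN_DOCUMENT_TREE), requestTree)
    }

    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == requestTree && resultCode == RESULT_OK) {
            val treeUri: Uri = data?.data ?: return
            // Keep the grant across restarts; still limited to this subtree.
            contentResolver.takePersistableUriPermission(
                treeUri,
                Intent.FLAG_GRANT_READ_URI_PERMISSION
            )
            // Hypothetically, any classification the app requests could then
            // only ever cover files under treeUri.
        }
    }
}
```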

It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don't know enough to say.

Besides, you think that Google isn't already scanning for things like CSAM? It's been confirmed to be done on platforms like Google Photos well before SafetyCore was introduced, though I've not seen anything about it being done on devices yet (correct me if I'm wrong).

[-] lepinkainen@lemmy.world 0 points 1 month ago* (last edited 1 month ago)

This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous amount of safeties to protect people's privacy, and still it got shouted down.

I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing

EDIT: from looking at the downvotes, it really seems that Google can do no wrong 😆 And Apple is always the bad guy in lemmy

[-] Ulrich@feddit.org 1 points 1 month ago

Google did end up doing exactly that, and what happened was, predictably, people were falsely accused of child abuse and CSAM.

[-] Natanael@infosec.pub 1 points 1 month ago* (last edited 1 month ago)

Apple had it report suspected matches, rather than warning locally

It got canceled because the fuzzy hashing algorithms turned out to be so insecure it was unfixable (easy to plant false positives).
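
For anyone curious what "easy to plant false positives" means in practice, here's a toy perceptual hash (this is not Apple's NeuralHash, just an illustration of why fuzzy hashes are malleable: nearby pixel values map to the same bits, so an attacker can nudge an unrelated image until its hash matches a targeted one):

```kotlin
// Toy "average hash": one bit per pixel of an 8x8 grayscale thumbnail,
// set if the pixel is brighter than the image's mean.
fun averageHash(pixels: IntArray): Long {
    require(pixels.size == 64)
    val mean = pixels.average()
    var hash = 0L
    for (i in pixels.indices) {
        if (pixels[i] > mean) hash = hash or (1L shl i)
    }
    return hash
}

// A "match" means a small Hamming distance, not equality, which is
// exactly what makes collisions plantable.
fun hammingDistance(a: Long, b: Long): Int = java.lang.Long.bitCount(a xor b)

fun main() {
    val original = IntArray(64) { if (it % 2 == 0) 200 else 50 }
    // Perturb every pixel: the image changes, the hash bits do not,
    // so the edited picture still "matches" the original.
    val perturbed = IntArray(64) { i -> original[i] + if (i % 3 == 0) 10 else -10 }
    println(hammingDistance(averageHash(original), averageHash(perturbed))) // prints 0
}
```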

[-] noxypaws@pawb.social 1 points 1 month ago

it had a ridiculous amount of safeties to protect people’s privacy

The hell it did, that shit was gonna snitch on its users to law enforcement.

[-] lepinkainen@lemmy.world 0 points 1 month ago

Nope.

A human checker would get a reduced-quality copy after multiple CSAM matches. No police were to be called unless the human checker verified a positive match.

Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked

[-] noxypaws@pawb.social 1 points 1 month ago

That's a fucking wiretap, yo

[-] mctoasterson@reddthat.com 1 points 1 month ago

People don't seem to understand the risks presented by normalizing client-side scanning on closed source devices. Think about how image recognition works. It scans image content locally and matches to keywords or tags, describing the person, objects, emotions, and other characteristics. Even the rudimentary open-source model on an immich deployment on a Raspberry Pi can process thousands of images and make all the contents searchable with alarming speed and accuracy.
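
To make that concrete: classification output is just compact labels, and a few lines of code turn them into a searchable index (names and sample output invented for illustration):

```kotlin
// Sketch of the mechanism described above: once a local model has labeled
// each image, a trivial inverted index makes the whole library
// keyword-searchable. Note how tiny these labels are compared to the
// images themselves.
fun buildIndex(labels: Map<String, List<String>>): Map<String, List<String>> {
    val index = mutableMapOf<String, MutableList<String>>()
    for ((imageId, tags) in labels) {
        for (tag in tags) index.getOrPut(tag) { mutableListOf() }.add(imageId)
    }
    return index
}

fun main() {
    // The kind of output a local classifier might produce for three photos.
    val labels = mapOf(
        "IMG_0001.jpg" to listOf("beach", "two people", "smiling"),
        "IMG_0002.jpg" to listOf("document", "passport"),
        "IMG_0003.jpg" to listOf("beach", "dog"),
    )
    val index = buildIndex(labels)
    println(index["beach"]) // [IMG_0001.jpg, IMG_0003.jpg]
}
```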

So once similar image analysis is done on a phone locally, and pre-encryption, it is trivial for Apple or Google to use that for whatever purposes their use terms allow. Forget the iCloud encryption backdoor. The big tech players can already scan content on your device pre-encryption.

And just because someone does a traffic analysis of the process itself (SafetyCore or mediaanalysisd or whatever) and shows it doesn't directly phone home doesn't mean it is safe. The entire OS is closed source, and it needs only to backchannel small amounts of data in order to fuck you over.

Remember, the original justification for client-side scanning from Apple was "detecting CSAM". Well, they backed away from that line of thinking, but they kept all the client-side scanning in iOS and macOS. It would be trivial for them to flag many other types of content and furnish that data to governments or third parties.

[-] DuskyRo@lemmy.world 1 points 1 month ago

SafetyCore Placeholder: if the real app ever tries to reinstall itself, it will fail due to a signature mismatch.

[-] moncharleskey@lemmy.zip 0 points 1 month ago

I struggle with GitHub sometimes. It says to download the apk but I don't see it in the file list. Anyone care to point me in the right direction?

[-] sommerset@thelemmy.club -1 points 1 month ago

Thanks. Just uninstalled. What cunts.

[-] dev_null@lemmy.ml 0 points 1 month ago

Do we have any proof of it doing anything bad?

Going by Google's description of what it is, it seems like a good thing. Of course we should absolutely assume Google is lying and it actually does something nefarious, but we should get some proof before picking up the pitchforks.

[-] sommerset@thelemmy.club -1 points 1 month ago* (last edited 1 month ago)

Google is always 100% lying.
There are too many instances to list, and I'm not spending 5 hours collecting examples for you.
They removed "don't be evil" a long time ago.

[-] _sideffect@lemmy.world -1 points 1 month ago
[-] Albbi@lemmy.ca 0 points 1 month ago

Why are you linking to a known Nazi website?

[-] _sideffect@lemmy.world -1 points 1 month ago

Because that's where I got the info from first? Grow up

[-] RampantParanoia2365@lemmy.world 0 points 1 month ago

...this link is about SafetyCore. Which weather app?

[-] _sideffect@lemmy.world -1 points 1 month ago

There's another one mentioned in the comments
