Gimme a Linux phone, I'm ready for it.
If there were something that could run Android apps virtualized, I'd switch in a heartbeat.
The Firefox Phone should've been a real contender. I just want a browser in my pocket that takes good pictures and plays podcasts.
Unfortunately Mozilla is going the enshittification route more and more. Though in this case it's just as well the Firefox Phone didn't take off.
Per one tech forum this week:
Stop spreading misinformation.
To quote the most salient post:
The app doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.
Which is a sorely needed feature to tackle problems like SMS scams.
And what exactly does that have to do with GrapheneOS?
Please, read the links. They are the security and privacy experts when it comes to Android. That's their explanation of what this Android System SafetyCore actually is.
Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”
GrapheneOS — an Android security developer — provides some comfort: SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”
But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Which gets to transparency again.
I've just given it the boot from my phone.
It doesn't appear to have been doing anything yet, but whatever.
The app can be found here: https://play.google.com/store/apps/details?id=com.google.android.safetycore
The app reviews are a good read.
Thanks for the link. This is impressive because it really has all the traits of spyware; apparently it installs without asking for permission?
Yup, heard about it a week or two ago. Found it installed on my Samsung phone; it never asked for permissions or gave any indication that it had been added to my phone.
For people who have not read the article:
Forbes states that there is no indication that this app can or will "phone home".
Its stated use is for other apps to scan an image they already have access to and find out what kind of thing it is (known as "classification"). For example, to find out if the picture you've been sent is a dick pic so the app can blur it.
My understanding is that, if this is implemented correctly (a big 'if') this can be completely safe.
Apps requesting classification could be limited to classifying only files they already have access to. Remember that Android nowadays has a concept of "scoped storage" that lets you restrict folder access. If that's the case, it's no less safe than not having SafetyCore at all. It just saves space, as companies like Signal, WhatsApp, etc. no longer need to train and ship their own machine learning models inside their apps; it becomes a common library / API any app can use.
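A toy sketch of that "common library" idea, assuming the on-device model only ever runs on content the app already holds (all names here are invented for illustration; this is not the real SafetyCore API):

```python
# Hypothetical on-device classifier: one model shipped with the OS,
# called by apps on content they already have. Nothing leaves the device.

def classify_locally(text: str) -> set:
    """Stand-in for an on-device ML model: returns labels, sends nothing anywhere."""
    labels = set()
    # Crude heuristic standing in for a real model.
    if "verify your account" in text.lower() and "http" in text.lower():
        labels.add("possible_scam")
    return labels

def handle_incoming_sms(body: str) -> str:
    """A messaging app only warns the user; the message never goes to a server."""
    if "possible_scam" in classify_locally(body):
        return "[SCAM WARNING] " + body
    return body
```

The point is architectural: the app queries a local library and decides what to show; no network call is involved in the classification step itself.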
It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don't know enough to say.
Besides, do you think Google isn't already scanning for things like CSAM? It's been confirmed to happen on platforms like Google Photos well before SafetyCore was introduced, though I've not seen anything about it being done on-device yet (correct me if I'm wrong).
This is EXACTLY what Apple tried to do with their on-device CSAM detection. It had a ridiculous number of safeties to protect people’s privacy, and still it got shouted down.
I’m interested to see what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing.
EDIT: from looking at the downvotes, it really seems that Google can do no wrong 😆 And Apple is always the bad guy in lemmy
Google did end up doing exactly that, and what happened was, predictably, people were falsely accused of child abuse and CSAM.
Apple had it report suspected matches, rather than warning locally
It got canceled because the fuzzy hashing algorithms turned out to be so insecure it was unfixable (it's easy to plant false positives).
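A toy illustration of why fuzzy/perceptual hashes are collidable. This uses a deliberately crude "average hash" rather than Apple's NeuralHash, but the failure mode is the same: many visually unrelated images map to the same hash, so matches can be forged.

```python
# Toy "average hash": one bit per pixel, set if the pixel is above the image mean.
# Real perceptual hashes (pHash, NeuralHash) are fancier but share this property:
# the mapping is many-to-one, so colliding inputs can be constructed.

def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# Two very different flattened 2x2 "images":
target = [10, 10, 200, 200]   # high-contrast image on a blocklist
forged = [90, 90, 110, 110]   # near-uniform gray, looks nothing like it

# Both hash to (0, 0, 1, 1) — a planted false positive.
assert average_hash(target) == average_hash(forged)
```

Against a real system an attacker would need an adversarial search rather than hand-picked pixels, but researchers demonstrated exactly such collisions against NeuralHash shortly after it was published.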
it had a ridiculous amount of safeties to protect people’s privacy
The hell it did, that shit was gonna snitch on its users to law enforcement.
Nope.
A human checker would get a reduced-quality copy after multiple CSAM matches. No police were to be called if the human checker didn’t verify a positive match.
Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked
That's a fucking wiretap, yo
People don't seem to understand the risks presented by normalizing client-side scanning on closed-source devices. Think about how image recognition works. It scans image content locally and matches it to keywords or tags, describing the people, objects, emotions, and other characteristics. Even the rudimentary open-source model on an immich deployment on a Raspberry Pi can process thousands of images and make all the contents searchable with alarming speed and accuracy.
So once similar image analysis is done on a phone locally, and pre-encryption, it is trivial for Apple or Google to use that for whatever purposes their use terms allow. Forget the iCloud encryption backdoor. The big tech players can already scan content on your device pre-encryption.
And just because someone does a traffic analysis of the process itself (safety core or mediaanalysisd or whatever) and shows it doesn't directly phone home, doesn't mean it is safe. The entire OS is closed source, and it needs only to backchannel small amounts of data in order to fuck you over.
Remember, the original justification for client-side scanning from Apple was "detecting CSAM". Well, they backed away from that line of thinking, but they kept all the client-side scanning in iOS and macOS. It would be trivial for them to flag many other types of content and furnish that data to governments or third parties.
SafetyCore Placeholder: install it so that if SafetyCore ever tries to reinstall itself, it will fail due to a signature mismatch.
I struggle with GitHub sometimes. It says to download the apk but I don't see it in the file list. Anyone care to point me in the right direction?
Scroll down to releases.
Thanks. Just uninstalled. What cunts.
Do we have any proof of it doing anything bad?
Taking Google's description at face value, it seems like a good thing. Of course we should absolutely assume Google is lying and it actually does something nefarious, but we should get some proof before picking up the pitchforks.
Google is always 100% lying.
There are too many instances to list and I'm not spending 5 hours collecting examples for you.
They removed "don't be evil" a long time ago.
There's this, and another weather app. Uninstall both asap
Why are you linking to a known Nazi website?
Because that's where I got the info first? Grow up.
....this link is about Safety core. Which weather app?
There's another one mentioned in the comments