Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to 'Settings’ > 'Apps’, then delete the application.”
SafetyCore Placeholder — a dummy app, so if SafetyCore ever tries to reinstall itself it will fail due to a signature mismatch.
I struggle with GitHub sometimes. It says to download the apk but I don’t see it in the file list. Anyone care to point me in the right direction?
Scroll down to releases.
There’s an app called Obtainium that lets you link the main page of GitHub apps and manages the download, installation, and updates of those apps.
Great if you want the latest software directly from the source.
I didn’t understand the value of F-Droid at all since it feels like a web wrapper. Thanks to you I finally pulled the trigger on Obtainium. Omg, that’s simple af.
It’s a web wrapper that points to a non-Google software repo.
The non-Google software repo is the important part, the interface can be bad as long as it can install software.
I use Obtainium too, but F-Droid is my first stop when I need an app. Google’s Play Store is a last resort.
Droid-ify and Neo-Store are alternative clients for the F-Droid repository (and other repos) that you may like better than the official client. But yeah, Obtainium is indeed simple, and it’s powerful if you already know exactly which app you want to install (rather than searching for relevant options in some repositories).
deleted by creator
If there were something that could run Android apps virtualized, I’d switch in a heartbeat.
The Firefox Phone should’ve been a real contender. I just want a browser in my pocket that takes good pictures and plays podcasts.
Unfortunately Mozilla is going the enshittification route more and more. Maybe it’s for the best, in this case, that the Firefox Phone didn’t take off.
Per one tech forum this week
Stop spreading misinformation.
To quote the most salient post
The app doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.
Which is a sorely needed feature to tackle problems like SMS scams
Why do you need machine learning for detecting scams?
Is someone in 2025 trying to help you out of the goodness of their heart? No. Move on.
If the cellular carriers were forced to verify that caller ID (or the SMS equivalent) was accurate, SMS scams would disappear (or at least be weaker). Google shouldn’t have to do the job of the carriers, and if they wanted to implement this anyway they should let the user choose which service performs the task, similar to how they let the user choose which “Android System WebView” is used.
No, that wouldn’t make much difference. I don’t think I’ve seen a real world attack via SMS that even bothered to “forge” the from-field. People are used to getting texts from unknown numbers.
And how would you possibly implement this supposed “caller-id” for a field that doesn’t even have to be set to a number?
Caller ID is the thing that tells you the number. It isn’t cheap to forge, but it’s the only way a scam could reasonably affect anyone with more than half a brain. There is never a reason to send information to an unknown SMS number, or to click on a link in a text message from an unknown number.
And what exactly does that have to do with GrapheneOS?
Please, read the links. They are the security and privacy experts when it comes to Android. That’s their explanation of what this Android System SafetyCore actually is.
Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”
GrapheneOS — an Android security developer — provides some comfort, that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”
But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Which gets to transparency again.
Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content
Cheers Google but I’m a capable adult, and able to do this myself.
For people who have not read the article:
Forbes states that there is no indication that this app can or will “phone home”.
Its stated use is for other apps to scan an image they have access to, to find out what kind of thing it is (known as “classification”). For example, to find out if the picture you’ve been sent is a dick pic so the app can blur it.
My understanding is that, if this is implemented correctly (a big ‘if’) this can be completely safe.
Apps requesting classification could be limited to only classifying files that they already have access to. Remember that Android nowadays has a concept of “scoped storage” that lets you restrict folder access. If this is the case, it’s no less safe than not having SafetyCore at all. It just saves space, as companies like Signal, WhatsApp etc. no longer need to train and ship their own machine learning models inside their apps; it becomes a common library / API any app can use.
It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don’t know enough to say.
Besides, you think that Google isn’t already scanning for things like CSAM? It’s been confirmed to be done on platforms like Google Photos well before SafetyCore was introduced, though I’ve not seen anything about it being done on devices yet (correct me if I’m wrong).
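To make the “classification” idea concrete, here’s a toy sketch in Python. To be clear, this is a hypothetical API I made up for illustration, not Google’s actual interface; the point is the privacy property being claimed: the classifier only ever sees content the calling app already has, and only a label comes out — nothing touches the network.

```python
# Toy on-device classifier (hypothetical, NOT Google's real API).
# The app passes in content it already has access to; only a label
# is returned, and no network calls are made anywhere in this path.

SCAM_KEYWORDS = {"prize", "winner", "urgent", "gift card", "wire transfer"}

def classify_message(text: str) -> str:
    """Classify a message locally; returns a label, never the content."""
    lowered = text.lower()
    hits = sum(1 for kw in SCAM_KEYWORDS if kw in lowered)
    return "likely_scam" if hits >= 2 else "ok"

# The messaging app decides what to do with the label (e.g. show a warning):
label = classify_message("URGENT: you are a WINNER, send a gift card code!")
print(label)  # likely_scam
```

Whether SafetyCore actually keeps to this shape is exactly the closed-source trust question people are arguing about.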
This is EXACTLY what Apple tried to do with their on-device CSAM detection, it had a ridiculous amount of safeties to protect people’s privacy and still it got shouted down
I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing
EDIT: from looking at the downvotes, it really seems that Google can do no wrong 😆 And Apple is always the bad guy in lemmy
Apple had it report suspected matches, rather than warning locally
It got canceled because the fuzzy hashing algorithms turned out to be so insecure it’s unfixable (easy to plant false positives)
They were not “suspected”; they had to be matches to actual CSAM.
And after that a reduced-quality copy was shown to an actual human, not an AI like in Google’s case.
So the false positive would slightly inconvenience a human checker for 15 seconds, not get you Swatted or your account closed
Yeah, so here’s the next problem: downscaling attacks exist against those algorithms too.
Also, even if those attacks were prevented they’re still going to look through basically your whole album if you trigger the alert
And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.
No cops are called, no accounts closed
The scaling attack specifically can make a photo sent to you look innocent to you and malicious to the reviewer, see the link above
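For the curious, here’s roughly why fuzzy/perceptual hashes are attackable, using a toy average-hash. This is a deliberately simplified stand-in, not Apple’s actual NeuralHash: real systems use learned features, but they share the same weakness that small pixel changes which don’t cross the decision thresholds leave the hash unchanged — which is what lets an attacker nudge an unrelated image toward a target hash.

```python
# Minimal average-hash (aHash) sketch. Images here are tiny grayscale
# grids (lists of rows of brightness values 0-255).

def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

original = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
]

# Perturbations that don't cross the mean leave the hash unchanged --
# the same slack an attacker exploits to craft a harmless-looking image
# whose hash collides with a flagged one (a planted false positive).
tweaked = [
    [190, 210, 20, 5],
    [205, 195, 15, 12],
]

print(average_hash(original) == average_hash(tweaked))  # True
```

The scaling attack is the inverse trick: craft a large image whose downscaled version (the thing actually hashed or reviewed) looks completely different from the full-size one.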
Google did end up doing exactly that, and what happened was, predictably, people were falsely accused of child abuse and CSAM.
it had a ridiculous amount of safeties to protect people’s privacy
The hell it did, that shit was gonna snitch on its users to law enforcement.
Nope.
A human checker would get a reduced-quality copy after multiple CSAM matches. No police were to be called if the human checker didn’t verify a positive match.
Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked
That’s a fucking wiretap, yo
Doing the scanning on-device doesn’t mean that the findings cannot be reported further. I don’t want others going thru my private stuff without asking - not even machine learning.
I’ve just given it the boot from my phone.
It doesn’t appear to have been doing anything yet, but whatever.
The app can be found here: https://play.google.com/store/apps/details?id=com.google.android.safetycore
The app reviews are a good read.
Thanks for the link. This is impressive because it really has all the traits of spyware; apparently it installs without asking for permission?
Yup, heard about it a week or two ago. Found it installed on my Samsung phone, it never asked for permissions or gave any info that it was added to my phone.
Smartest Google Defender
Broken English too, probably from a paid Indian reviewer.
People don’t seem to understand the risks presented by normalizing client-side scanning on closed source devices. Think about how image recognition works. It scans image content locally and matches to keywords or tags, describing the person, objects, emotions, and other characteristics. Even the rudimentary open-source model on an immich deployment on a Raspberry Pi can process thousands of images and make all the contents searchable with alarming speed and accuracy.
So once similar image analysis is done on a phone locally, and pre-encryption, it is trivial for Apple or Google to use that for whatever purposes their use terms allow. Forget the iCloud encryption backdoor. The big tech players can already scan content on your device pre-encryption.
And just because someone does a traffic analysis of the process itself (safety core or mediaanalysisd or whatever) and shows it doesn’t directly phone home, doesn’t mean it is safe. The entire OS is closed source, and it needs only to backchannel small amounts of data in order to fuck you over.
Remember the original justification for clientside scanning from Apple was “detecting CSAM”. Well they backed away from that line of thinking but they kept all the client side scanning in iOS and Mac OS. It would be trivial for them to flag many other types of content and furnish that data to governments or third parties.
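To put numbers on “small amounts of data”: the scan results don’t need to include any images at all. A back-of-envelope calculation (my own illustration, not a measured figure) shows a match/no-match bit per photo for an entire library fits in a few kilobytes, which is trivial to hide in ordinary telemetry traffic.

```python
# Back-of-envelope: how little data a client-side scanner would need
# to exfiltrate if it only reported match/no-match flags.

photos = 20_000              # a large personal photo library
bits = photos                # 1 bit per photo: flagged or not
payload_bytes = bits // 8    # pack 8 flags per byte

print(payload_bytes)         # 2500 -> about 2.5 KB for 20,000 photos
```

That’s why “we checked, it doesn’t upload your photos” misses the point: the dangerous output is the classification, not the pixels.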
It didn’t appear in my apps list so I thought it wasn’t installed. But when I searched for the app name it appears. So be aware.
How did you search for it? Search bar in settings?
On my settings screen I have a search bar at the top
Tried that, found nothing.
You can look it up in your app management settings too; search for it there.
Found nothing
Not on mine, it doesn’t. I don’t use the Play Store. I don’t have Google Play Services. And I don’t have Google Apps installed. And I’m running Lineage OS. So, fuck you Google.
There’s one in every thread.
“i just needed to pop in here and mention that the terrible/wrong/evil thing in the post doesn’t affect me at all, like it does for you suckers ROFLMFAO…but also: LOL”
Thanks for bringing this up, first I’ve heard of it. Not present on my GrapheneOS pixel, present on stock.
I suppose I should encourage Pixel owners to switch from stock to Graphene; I know which device I’d rather spend time using. The GrapheneOS one, of course.
I’ve looked into it briefly. Did you have any issues switching? I’m concerned about how some apps I need would function.
I switched from a Samsung to a Pixel a couple years ago. I instantly installed GrapheneOS and have loved it ever since. It generally works perfectly normally with the huge background benefit of security and privacy. The only issues I have had is one of my banking apps doesn’t work (but the others work fine) and lack of RCS (but I’m sure it’s coming). In short, highly highly recommend. I will be sticking with GOS for the long term!
I did a fair amount of research before the switch to find alternatives to Google services, some I’ve replaced, others I felt were too much of a hassle for my phone usage.
I’ve kept my original pixel stock, the hardest part about switching this one over was plugging it in and following the instructions.
I’m hoping to get rid of my stock OS Pixel soon; it would appear my bank hasn’t blocked its app on Graphene, unlike Uber.
If it comes to that, I’ll buy a cheap af shitbox to use purely for banking and Uber.
If you’ve any other questions I’m happy to help find the answers with you; feel free to DM me.
Uber works on GrapheneOS
I’ll give it another try then! Last attempt it wouldn’t open.
I’ve got a Pixel 8 Pro and I’m currently using the stock OS. Anything in particular that you miss with Graphene OS?
I still use a stock pixel for work related and daily usage, but the alternatives I’ve found between F-Droid and Aurora store I’ve never felt lacking.
Maybe I’ll finish the switch fully in the coming months.
There’s this, and another weather app. Uninstall both asap
…this link is about Safety core. Which weather app?
There’s another one mentioned in the comments
Removed by mod
Why are you linking to a known Nazi website?
Because that’s where I got the info from first? Grow up
More information: It’s been rolling out to Android 9+ users since November 2024 as a high priority update. Some users are reporting it installs when on battery and off wifi, unlike most apps.
App description on Play store: SafetyCore is a Google system service for Android 9+ devices. It provides the underlying technology for features like the upcoming Sensitive Content Warnings feature in Google Messages that helps users protect themselves when receiving potentially unwanted content. While SafetyCore started rolling out last year, the Sensitive Content Warnings feature in Google Messages is a separate, optional feature and will begin its gradual rollout in 2025. The processing for the Sensitive Content Warnings feature is done on-device and all of the images or specific results and warnings are private to the user.
Description by google Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then prompts with a “speed bump” that contains help-finding resources and options, including to view the content. When the feature is enabled, and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and preventing accidental shares. - https://9to5google.com/android-safetycore-app-what-is-it/
So it looks like something that sends pictures from your messages (at least initially) to Google for an AI to check whether they’re “sensitive”. The app is 44 MB, which seems too small to contain a useful AI, and I don’t think this could happen on-phone, so it must require sending your on-phone data to Google?
Thanks. Just uninstalled. What cunts.
Do we have any proof of it doing anything bad?
Taking Google’s description of what it is it seems like a good thing. Of course we should absolutely assume Google is lying and it actually does something nefarious, but we should get some proof before picking up the pitchforks.
Google is always 100% lying.
There are too many instances to list and I’m not spending 5 hours collecting examples for you.
They removed “don’t be evil” a long time ago.