Per one tech forum this week: “Google has quietly installed an app on all Android devices called ‘Android System SafetyCore’. It claims to be a ‘security’ application, but whilst running in the background, it collects call logs, contacts, location, your microphone, and much more making this application ‘spyware’ and a HUGE privacy concern. It is strongly advised to uninstall this program if you can. To do this, navigate to 'Settings’ > 'Apps’, then delete the application.”

    • moncharleskey@lemmy.zip

      I struggle with GitHub sometimes. It says to download the apk but I don’t see it in the file list. Anyone care to point me in the right direction?

    • ad_on_is@lemm.eeOP

      If there were something that could run Android apps virtualized, I’d switch in a heartbeat.

    • ilinamorato@lemmy.world

      The Firefox Phone should’ve been a real contender. I just want a browser in my pocket that takes good pictures and plays podcasts.

      • StefanT@lemmy.world

        Unfortunately, Mozilla is going further and further down the enshittification route. Which makes it a good thing, in this case, that the Firefox Phone didn’t take off.

    • kattfisk@lemmy.dbzer0.com

      To quote the most salient post:

      The app doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.

      Which is a sorely needed feature to tackle problems like SMS scams
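
      To make the “locally, without sharing” part concrete, here’s a toy sketch in Kotlin. It is not SafetyCore’s actual interface (which isn’t public); the keyword list, scoring, and function names are made up purely for illustration. The point is that the message is scored on-device and the only outcome is a warning shown to the user.

      ```kotlin
      // Hypothetical illustration only -- not SafetyCore's real API.
      // An incoming SMS is scored locally; nothing leaves the device,
      // the app just decides whether to attach a warning.
      fun scamScore(message: String): Double {
          val redFlags = listOf("verify your account", "gift card", "package held", "urgent")
          val hits = redFlags.count { message.lowercase().contains(it) }
          val hasLink = Regex("""https?://\S+""").containsMatchIn(message)
          return (hits + (if (hasLink) 1 else 0)) / (redFlags.size + 1.0)
      }

      fun main() {
          val sms = "URGENT: your package is held, click http://example.test to verify your account"
          if (scamScore(sms) > 0.3) {
              println("Likely scam - show a warning, don't report anywhere")
          }
      }
      ```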

      • throwback3090@lemmy.nz

        Why do you need machine learning for detecting scams?

        Is someone in 2025 trying to help you out of the goodness of their heart? No. Move on.

      • desktop_user@lemmy.blahaj.zone

        If the cellular carriers were forced to verify that caller ID (or the SMS equivalent) was accurate, SMS scams would disappear (or at least be weakened). Google shouldn’t have to do the job of the carriers, and if they wanted to implement this anyway, they should let the user choose which service performs the task, similar to how they let the user choose which “Android System WebView” is used.

        • kattfisk@lemmy.dbzer0.com

          No, that wouldn’t make much difference. I don’t think I’ve seen a real world attack via SMS that even bothered to “forge” the from-field. People are used to getting texts from unknown numbers.

          And how would you possibly implement this supposed “caller-id” for a field that doesn’t even have to be set to a number?

          • desktop_user@lemmy.blahaj.zone

            Caller ID is the thing that tells you the number. It isn’t cheap to forge, but it’s the only way a scam could reasonably affect anyone with more than half a brain. There is never a reason to send information to an unknown SMS number, or to click on a link in a text message from an unknown number.

      • teohhanhui@lemmy.world

        Please, read the links. They are the security and privacy experts when it comes to Android. That’s their explanation of what this Android System SafetyCore actually is.

  • SavageCoconut@lemmy.world

    Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content. Users control SafetyCore, and SafetyCore only classifies specific content when an app requests it through an optionally enabled feature.”

    GrapheneOS — an Android security developer — provides some comfort, that SafetyCore “doesn’t provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc. This allows apps to check content locally without sharing it with a service and mark it with warnings for users.”

    But GrapheneOS also points out that “it’s unfortunate that it’s not open source and released as part of the Android Open Source Project and the models also aren’t open let alone open source… We’d have no problem with having local neural network features for users, but they’d have to be open source.” Which gets to transparency again.

  • AWittyUsername@lemmy.world

    Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content”.

    Cheers Google but I’m a capable adult, and able to do this myself.

  • Armand1@lemmy.world

    For people who have not read the article:

    Forbes states that there is no indication that this app can or will “phone home”.

    Its stated use is to let other apps scan an image they already have access to and find out what kind of thing it is (known as “classification”). For example, to find out whether the picture you’ve been sent is a dick pic so the app can blur it.

    My understanding is that, if this is implemented correctly (a big ‘if’) this can be completely safe.

    Apps requesting classification could be limited to only classifying files that they already have access to. Remember that Android nowadays has a concept of “scoped storage” that lets you restrict folder access. If this is the case, it’s no less safe than not having SafetyCore at all. It just saves you space, since companies like Signal, WhatsApp etc. no longer need to train and ship their own machine learning models inside their apps; it becomes a common library / API any app can use (rough sketch below).
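
    As a rough sketch of that idea (hypothetical names throughout, since SafetyCore’s real interface isn’t public): the app hands the classifier bytes it could already read anyway and gets a label back, so no new storage access is involved.

    ```kotlin
    // Illustration only -- invented names, not Google's actual API.
    interface ContentClassifier {                      // system-provided model, shared by all apps
        fun classify(imageBytes: ByteArray): String    // e.g. "nudity", "scam", "ok"
    }

    class IncomingImageFilter(private val classifier: ContentClassifier) {
        // The messaging app already holds these bytes; the classifier grants no new access.
        fun shouldBlur(imageBytes: ByteArray): Boolean =
            classifier.classify(imageBytes) == "nudity"    // decision never leaves the device
    }

    fun main() {
        val stub = object : ContentClassifier {           // stand-in for the real system service
            override fun classify(imageBytes: ByteArray) = "ok"
        }
        println(IncomingImageFilter(stub).shouldBlur(ByteArray(0)))   // false
    }
    ```

    Whether the real service is actually gated that tightly is exactly the “big if” above.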

    It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don’t know enough to say.

    Besides, you think that Google isn’t already scanning for things like CSAM? It’s been confirmed to be done on platforms like Google Photos well before SafetyCore was introduced, though I’ve not seen anything about it being done on devices yet (correct me if I’m wrong).

    • lepinkainen@lemmy.world

      This is EXACTLY what Apple tried to do with their on-device CSAM detection, it had a ridiculous amount of safeties to protect people’s privacy and still it got shouted down

      I’m interested in seeing what happens when Holy Google, for which most nerds have a blind spot, does the exact same thing

      EDIT: from looking at the downvotes, it really seems that Google can do no wrong 😆 And Apple is always the bad guy in lemmy

      • Natanael@infosec.pub

        Apple had it report suspected matches, rather than warning locally

        It got canceled because the fuzzy hashing algorithms turned out to be so insecure it’s unfixable (easy to plant false positives)
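
        Roughly why that’s hard to fix, as a toy sketch (tiny made-up 64-bit hashes; real systems use much larger perceptual hashes, but the matching idea is the same): a “match” just means the hashes are close enough, so an attacker only needs to craft an innocent-looking image whose hash lands within the threshold.

        ```kotlin
        // Toy illustration of fuzzy-hash matching and why false positives can be planted.
        fun hammingDistance(a: Long, b: Long): Int = java.lang.Long.bitCount(a xor b)

        fun isMatch(imageHash: Long, knownBadHash: Long, threshold: Int = 4): Boolean =
            hammingDistance(imageHash, knownBadHash) <= threshold

        fun main() {
            val knownBadHash = 0x5A5A_5A5A_5A5A_5A5AL
            val craftedHash = knownBadHash xor 0b0110L      // an adversarial image, 2 bits off
            println(isMatch(craftedHash, knownBadHash))     // true -> planted false positive
        }
        ```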

        • lepinkainen@lemmy.world

          They were not “suspected”; they had to be matches to actual CSAM.

          And after that, a reduced-quality copy was shown to an actual human, not an AI like in Google’s case.

          So a false positive would slightly inconvenience a human checker for 15 seconds, not get you swatted or your account closed.

          • Natanael@infosec.pub

            Yeah, so here’s the next problem: downscaling attacks exist against those algorithms too.

            https://scaling-attacks.net/

            Also, even if those attacks were prevented, they’re still going to look through basically your whole album if you trigger the alert.

            • lepinkainen@lemmy.world

              And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.

              No cops are called, no accounts closed

              • Natanael@infosec.pub

                The scaling attack specifically can make a photo sent to you look innocent to you and malicious to the reviewer, see the link above

      • Noxy@pawb.social

        it had a ridiculous amount of safeties to protect people’s privacy

        The hell it did, that shit was gonna snitch on its users to law enforcement.

        • lepinkainen@lemmy.world

          Nope.

          A human checker would get a reduced-quality copy after multiple CSAM matches. No police were to be called if the human checker didn’t verify a positive match.

          Your idea of flooding someone with fake matches that are actually cat pics wouldn’t have worked

    • Opinionhaver@feddit.uk

      Doing the scanning on-device doesn’t mean that the findings cannot be reported further. I don’t want others going through my private stuff without asking - not even machine learning.

  • Ilovethebomb@lemm.ee

    I’ve just given it the boot from my phone.

    It doesn’t appear to have been doing anything yet, but whatever.

    • hector@sh.itjust.works

      Thanks for the link. This is impressive because it really has all the traits of spyware; apparently it installs without asking for permission?

      • Moose@moose.best

        Yup, heard about it a week or two ago. Found it installed on my Samsung phone; it never asked for permission or gave any indication that it had been added to my phone.

  • mctoasterson@reddthat.com

    People don’t seem to understand the risks presented by normalizing client-side scanning on closed source devices. Think about how image recognition works. It scans image content locally and matches to keywords or tags, describing the person, objects, emotions, and other characteristics. Even the rudimentary open-source model on an immich deployment on a Raspberry Pi can process thousands of images and make all the contents searchable with alarming speed and accuracy.
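
    As a rough sketch of what “make all the contents searchable” means in practice (the classify() step is just a placeholder standing in for whatever on-device model is used):

    ```kotlin
    // Once every image gets local tags, the whole library is keyword-searchable.
    fun classify(fileName: String): Set<String> =             // placeholder for a real model
        if (fileName.endsWith("0001.jpg")) setOf("person", "beach") else setOf("cat", "indoors")

    fun buildIndex(files: List<String>): Map<String, Set<String>> =
        files.associateWith { classify(it) }

    fun search(index: Map<String, Set<String>>, tag: String): List<String> =
        index.filterValues { tag in it }.keys.toList()

    fun main() {
        val index = buildIndex(listOf("IMG_0001.jpg", "IMG_0002.jpg"))
        println(search(index, "person"))                      // -> [IMG_0001.jpg]
    }
    ```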

    So once similar image analysis is done on a phone locally, and pre-encryption, it is trivial for Apple or Google to use that for whatever purposes their use terms allow. Forget the iCloud encryption backdoor. The big tech players can already scan content on your device pre-encryption.

    And just because someone does a traffic analysis of the process itself (SafetyCore or mediaanalysisd or whatever) and shows it doesn’t directly phone home, doesn’t mean it is safe. The entire OS is closed source, and it needs only to backchannel small amounts of data in order to fuck you over.

    Remember the original justification for clientside scanning from Apple was “detecting CSAM”. Well they backed away from that line of thinking but they kept all the client side scanning in iOS and Mac OS. It would be trivial for them to flag many other types of content and furnish that data to governments or third parties.

  • Event_Horizon@lemmy.world

    It didn’t appear in my apps list, so I thought it wasn’t installed. But when I searched for the app name, it appeared. So be aware.

  • shortwavesurfer@lemmy.zip

    Not on mine, it doesn’t. I don’t use the Play Store. I don’t have Google Play Services. And I don’t have Google Apps installed. And I’m running Lineage OS. So, fuck you Google.

      • solsangraal@lemmy.zip

        “i just needed to pop in here and mention that the terrible/wrong/evil thing in the post doesn’t affect me at all, like it does for you suckers ROFLMFAO…but also: LOL”

  • CaptKoala@lemmy.ml

    Thanks for bringing this up; it’s the first I’ve heard of it. Not present on my GrapheneOS Pixel, present on stock.

    I suppose I should encourage Pixel owners to switch from stock to Graphene; I know which device I’d rather spend time using. The GrapheneOS one, of course.

    • Flying_Hellfish@lemmy.world

      I’ve looked into it briefly. Did you have any issues switching? I’m concerned about how some apps I need would function.

      • praechaox@lemmy.dbzer0.com

        I switched from a Samsung to a Pixel a couple of years ago. I instantly installed GrapheneOS and have loved it ever since. It generally works perfectly normally, with the huge background benefit of security and privacy. The only issues I’ve had are that one of my banking apps doesn’t work (but the others work fine) and the lack of RCS (but I’m sure it’s coming). In short, highly, highly recommend. I will be sticking with GOS for the long term!

      • CaptKoala@lemmy.ml

        I did a fair amount of research before the switch to find alternatives to Google services; some I’ve replaced, others I felt were too much of a hassle for my phone usage.

        I’ve kept my original Pixel on stock; the hardest part about switching this one over was plugging it in and following the instructions.

        I’m hoping to get rid of my stock OS Pixel soon; it would appear my bank hasn’t blocked its app on Graphene, unlike Uber.

        If it comes to that, I’ll buy a cheap af shitbox to use purely for banking and Uber.

        If you’ve any other questions, I’m happy to help find the answers with you; feel free to DM me.

    • SayNaughtOfIt@feddit.org

      I’ve got a Pixel 8 Pro and I’m currently using the stock OS. Anything in particular that you miss with Graphene OS?

      • CaptKoala@lemmy.ml

        I still use a stock Pixel for work-related and daily usage, but with the alternatives I’ve found between F-Droid and the Aurora Store, I’ve never felt like anything was lacking.

        Maybe I’ll finish the switch fully in the coming months.

  • DigitalDilemma@lemmy.ml

    More information: it’s been rolling out to Android 9+ users since November 2024 as a high-priority update. Some users are reporting that it installs while on battery and off Wi-Fi, unlike most apps.

    App description on Play store: SafetyCore is a Google system service for Android 9+ devices. It provides the underlying technology for features like the upcoming Sensitive Content Warnings feature in Google Messages that helps users protect themselves when receiving potentially unwanted content. While SafetyCore started rolling out last year, the Sensitive Content Warnings feature in Google Messages is a separate, optional feature and will begin its gradual rollout in 2025. The processing for the Sensitive Content Warnings feature is done on-device and all of the images or specific results and warnings are private to the user.

    Description by Google: Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing, and then prompts with a “speed bump” that contains help-finding resources and options, including to view the content. When the feature is enabled, and an image that may contain nudity is about to be sent or forwarded, it also provides a speed bump to remind users of the risks of sending nude imagery and preventing accidental shares. - https://9to5google.com/android-safetycore-app-what-is-it/

    So it looks like something that sends pictures from your messages (at least initially) to Google for an AI to check whether they’re “sensitive”. The app is 44 MB, so too small to contain a useful AI, and I don’t think this could happen on-phone, so it must require sending your on-phone data to Google?

    • dev_null@lemmy.ml

      Do we have any proof of it doing anything bad?

      Taking Google’s description of what it is it seems like a good thing. Of course we should absolutely assume Google is lying and it actually does something nefarious, but we should get some proof before picking up the pitchforks.

      • Fair Fairy@thelemmy.club

        Google is always 100% lying.
        There are too many instances to list and I’m not spending 5 hours collecting examples for you.
        They removed “don’t be evil” a long time ago.