• A Wild Mimic appears!@lemmy.dbzer0.com · 15 hours ago

      Max Schrems and his team have done a lot of good regarding user rights in the face of giants like Meta. I’m pretty sure that at least a handful of Meta employees regularly have nightmares because of NOYB.

    • Holli25@slrpnk.net · 1 day ago

      They are responsible for getting both “data protection adequacy agreements” for the US thrown out in court (see Max Schrems).

  • TryingSomethingNew@lemmy.world · 1 day ago

    Unsure what NOYB is, even after skimming this, but there’s an interesting bit in there about how people wouldn’t have the right to be forgotten once the AI has been trained.

    • sznowicki@lemmy.world · 1 day ago

      I think there’s a “reasonable” qualifier in the right to be forgotten. For example, if you have old backups on tape that you must keep for a few more years for whatever reason, you can decline to alter them if the cost would be exorbitant and you ensure the user’s data won’t come back after a recovery from said backup.
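
      Roughly what “make sure deleted users don’t come back after a restore” could look like, as a minimal sketch with hypothetical names (tombstones, apply_erasure_tombstones), not anyone’s actual setup:

      ```python
      # Minimal sketch (hypothetical names): keep a tombstone list of erasure
      # requests and re-apply it whenever an old backup is restored, so deleted
      # users don't silently reappear.

      def apply_erasure_tombstones(restored_users, tombstones):
          """Drop any user record whose ID was already erased under GDPR."""
          return [u for u in restored_users if u["user_id"] not in tombstones]

      # Example: a backup from last year still contains user 42,
      # but user 42 requested deletion in the meantime.
      tombstones = {42, 99}                      # IDs erased since the backup was taken
      restored_users = [
          {"user_id": 7,  "name": "Alice"},
          {"user_id": 42, "name": "Bob"},        # must not come back
      ]

      print(apply_erasure_tombstones(restored_users, tombstones))
      # -> [{'user_id': 7, 'name': 'Alice'}]
      ```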

      Also, they might train their models on a pseudo-anonymized dataset, so as long as it’s too expensive to deanonymize the user data, it could be fine in terms of GDPR.

      For example: you generate car trip stats per city per day. You could argue that you don’t need to delete user data that is part of this set if you ensure there are always enough trips recorded (so nobody can be deanonymized from a single entry) and deleting it would falsify your historical stats.
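
      A rough sketch of that argument in code (hypothetical data and field names, not any real pipeline): aggregate trips per city per day and only keep buckets with enough trips that no single person can be picked out of an entry.

      ```python
      from collections import Counter

      # Hypothetical trip records; in a real pipeline these would come from a DB.
      trips = [
          {"city": "Berlin", "day": "2025-05-01"},
          {"city": "Berlin", "day": "2025-05-01"},
          {"city": "Berlin", "day": "2025-05-01"},
          {"city": "Hamburg", "day": "2025-05-01"},   # only one trip -> identifiable
      ]

      K_MIN = 3  # minimum bucket size before a stat counts as anonymous enough

      def aggregate_trips(trips, k_min=K_MIN):
          """Count trips per (city, day) and drop buckets that are too small."""
          counts = Counter((t["city"], t["day"]) for t in trips)
          return {bucket: n for bucket, n in counts.items() if n >= k_min}

      print(aggregate_trips(trips))
      # -> {('Berlin', '2025-05-01'): 3}  (the Hamburg bucket is suppressed)
      ```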

      At my company, which likes to be super compliant, we do remove people from this kind of stats using pseudonymous references. So if you delete your account, an event updates the historical analytics data and removes all traces of your activity. But that’s because we can and want to be cool (company culture principles).
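
      Roughly what such a deletion event could look like, purely as an illustration with made-up names (analytics, handle_account_deleted, the p-… references): analytics rows carry only a pseudonymous reference, and the account-deletion event strips that reference’s contribution from the historical aggregates.

      ```python
      # Hypothetical sketch of an account-deletion event: analytics rows only
      # carry a pseudonymous reference, and the event removes that reference's
      # share from the historical aggregates.

      analytics = {
          # (city, day) -> {pseudonymous_ref: trip_count}
          ("Berlin", "2025-05-01"): {"p-8f3a": 2, "p-11c0": 1},
      }

      def handle_account_deleted(analytics, pseudo_ref):
          """Strip every trace of one pseudonymous reference from historical stats."""
          for bucket in analytics.values():
              bucket.pop(pseudo_ref, None)

      handle_account_deleted(analytics, "p-8f3a")
      print(analytics)
      # -> {('Berlin', '2025-05-01'): {'p-11c0': 1}}
      ```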

      Other data we have (website analytics) can’t go through this process, because we ensure we never know WHO did something. We only know what and when.

      • RvTV95XBeo@sh.itjust.works · 1 day ago

        CA has some strong privacy protections and a good chunk of the country’s population. IANAL, but if I were to hope for a similar lawsuit, it would come from a CA state court.

  • toastmeister@lemmy.ca · 24 hours ago (edited)

    Europe’s gonna cut itself off from AI and miss this tech boom. At least they still have internal combustion cars, until China eats their lunch.

    They break their own procurement laws to pick MSFT as well; they don’t even abide by their own bureaucracy.

    • TheBeege@lemmy.world · 17 hours ago

      Not all AI is equal. Europe does embrace certain types of AI depending on their production and usage. I work at a company pushing our AI throughout Europe, and the reception is generally very positive.

      These LLMs are just shit built in shitty ways. Their problem definition is shit, and the marketing of what they can do effectively is bullshit. There are some LLM efforts that are less shitty, but they’re not very popular yet.

    • MonkderVierte@lemmy.ml · 12 hours ago (edited)

      How many months ago was it again that they’d have needed multiple times the entire internet in data just to make some progress? Sorry, but it’s a bubble.

      Edit: by AI I mean LLMs, like @TheBeege said.

    • vivendi@programming.dev · 23 hours ago

      Meta’s recent LLAMA models are a disaster, and worse, they only masquerade as open models. Meanwhile, Europe has its own AI research centers like Mistral, which makes really good models under the Apache 2 license.

    • Squizzy@lemmy.world · 23 hours ago

      Surely there is a middle ground between bending over for the technogarchy and not having as wealthy an economy?