• phutatorius@lemmy.zip · 21 hours ago

      YouTube has been getting much worse lately as well. Lots of purported late-breaking Ukraine war news that’s nothing but badly-written lies. Same with reports of Trump legal defeats that haven’t actually happened. They are flooding the zone with shit, and poisoning search results with slop.

      • BarneyPiccolo@lemmy.today · 10 hours ago

        The entire media universe is being captured by Sociopathic Oligarchs, and they intend to extend the Conservative Propaganda Machine to cover everything. They will NOT be amenable to efforts toward monitoring truth in media, unless they can be the sole determiners of what is the truth.

    • Disagree. Without Section 230 (or the equivalent laws of other jurisdictions), your Fediverse instance would be forced to moderate even harder for fear of legal action. I mean, who even decides what “AI deception” is? Your average lemmy.world mod, an unpaid volunteer?

      It’s a threat to free speech.

      • 9488fcea02a9@sh.itjust.works · 1 day ago

        Also, it would be trivial for big tech to flood every fediverse instance with deceptive content and get us all shut down.

      • Lumisal@lemmy.world · 20 hours ago

        Just write the law so it only affects platforms with a minimum of x million users, or x percent of the population. You could even have regulation tiers tied to the number of active users, so those over the billion mark are regulated the strictest, like Facebook.

        That’ll leave smaller networks, forums, and businesses alone while finally applying some badly needed regulation to the large corporations messing with things. Something like the tier sketch below.
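
        A minimal sketch of that tiering idea in Python. The in-between thresholds are invented for illustration; the comment above only fixes the top tier at the billion-user mark:

        ```python
        # Map a platform's active-user count to a hypothetical regulation tier.
        def regulation_tier(active_users: int) -> str:
            if active_users >= 1_000_000_000:
                return "strictest"  # Facebook-scale platforms
            if active_users >= 50_000_000:  # assumed mid threshold
                return "strict"
            if active_users >= 1_000_000:   # assumed lower threshold
                return "basic"
            return "exempt"  # small networks, forums, and businesses

        assert regulation_tier(3_000_000_000) == "strictest"
        assert regulation_tier(40_000) == "exempt"
        ```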

        • GamingChairModel@lemmy.world · 10 hours ago

          I don’t think it’d be that simple.

          Any given website URL could go viral at any moment. In the old days, that might look like a DDoS that brings down the site (aka the slashdot effect or hug of death), but these days many small sites are hosted on infrastructure that is protected against unexpectedly high traffic.

          So if someone hosts deceptive content on their server and it can be viewed by billions, there would be a disconnect between a website’s reach and its accountability (to paraphrase Spider-Man’s Uncle Ben).

          • Lumisal@lemmy.world · 10 hours ago

            I agree it’s not that simple, but it’s just a proposed starting point for a solution. We could refine it further, then hand the refined idea to a lawyer as a charter to draft into a proper proposal that could be presented to the relevant governmental body for consideration.

            But few people like to put in that work. Even politicians don’t. That’s why corporations get so much of what they want: they do that work, and pay people to do it for them.

            That said, view count isn’t the same as membership. This solution wouldn’t be perfect.

            But it would be better than nothing at all, especially now that AI is turning the firehose of lies into a tsunami of lies. Currently one side only grows stronger in its capacity for causing havoc and mischief while the other, quite literally, does nothing, and sometimes even advocates for doing nothing. You could say it’s a reflection of the tolerance paradox that we’re seeing today.

          • Tad Lispy@europe.pub · 15 hours ago

            Proton is not a social medium. As to “how high”, the lawmakers have to decide on that, hopefully after some research and public consultations. It’s not an unprecedented problem.

            Another criterion might be revenue: if a company monetises users’ attention and makes above a certain amount, put extra moderation requirements on it.
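
            As a rough sketch, the two criteria could be combined so that crossing either threshold triggers the extra requirements. All figures below are placeholders, not numbers from this thread:

            ```python
            # Hypothetical eligibility test: extra moderation duties apply if
            # either the user count or attention-derived revenue is large enough.
            def requires_extra_moderation(monthly_active_users: int,
                                          ad_revenue_usd: float) -> bool:
                USER_THRESHOLD = 50_000_000        # assumed figure
                REVENUE_THRESHOLD = 100_000_000.0  # assumed figure
                return (monthly_active_users >= USER_THRESHOLD
                        or ad_revenue_usd >= REVENUE_THRESHOLD)
            ```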

          • Dozzi92@lemmy.world · 15 hours ago

            Yeah, I work for your biggest social media competitor; why would I not just go post slop all over your platform with the intent of getting you fined?

          • Lumisal@lemmy.world · 15 hours ago

            Proton isn’t social media.

            If you can’t understand why big = bad when it comes to the dissemination of misinformation, then clearly we’re already at an impasse, and there’s no point in further discussion of possible numbers, statistics, and other variables for determining potential regulations.

    • ImmersiveMatthew@sh.itjust.works · 1 day ago

      I think just the people need to be held accountable. While I am no fan of Meta, it is not their responsibility to be held legally accountable for what people choose to post. What we really need is zero-knowledge-proof tech to verify that a person is real without them having to share their personal information, but that breaks Meta’s (and others’) free business model, so here we are.
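
      The kind of zero-knowledge proof being gestured at can be illustrated with a toy Schnorr identification protocol, where a prover convinces a verifier it holds a secret key without ever revealing it. This is a minimal sketch with insecurely small parameters, not how a production identity system would actually be built:

      ```python
      import secrets

      # Toy group: safe prime p = 2q + 1, with g = 4 generating the
      # order-q subgroup. These sizes offer no real security.
      p, q, g = 10007, 5003, 4

      def keygen():
          x = secrets.randbelow(q - 1) + 1   # secret identity key
          return x, pow(g, x, p)             # public key y = g^x mod p

      def commit():
          r = secrets.randbelow(q - 1) + 1
          return r, pow(g, r, p)             # commitment t = g^r mod p

      def respond(x, r, c):
          return (r + c * x) % q             # response s = r + c*x mod q

      def verify(y, t, c, s):
          # Accept iff g^s == t * y^c (mod p), which holds when s = r + c*x.
          return pow(g, s, p) == (t * pow(y, c, p)) % p

      # One round of the interactive protocol:
      x, y = keygen()                        # user registers public key y
      r, t = commit()                        # prover commits
      c = secrets.randbelow(q)               # verifier's random challenge
      s = respond(x, r, c)
      assert verify(y, t, c, s)              # convinced, yet x never shared
      ```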

    • Rhoeri@lemmy.world · 1 day ago

      Sites AND the people that post to them. The age of consequence-free action needs to end.