Disagree. Without Section 230 (or the equivalent laws of their respective jurisdictions), your Fediverse instance would be forced to moderate even harder for fear of legal action. I mean, who even decides what “AI deception” is? Your average lemmy.world mod, an unpaid volunteer?
It’s a threat to free speech.
Also, it would be trivial for big tech to flood every Fediverse instance with deceptive content and get us all shut down.
Just make the law so it only affects platforms with a minimum of X million users, or X percent of the population. You could even have regulation tiers tied to the number of active users, so those over the billion mark, like Facebook, are regulated the strictest.
That’ll leave smaller networks, forums, and businesses alone while finally imposing some badly needed regulation on the large corporations messing with things.
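To make the tier idea concrete, here’s a rough sketch of how such thresholds could be expressed. Every number, cutoff, and tier name below is made up for illustration; the real values would come from lawmakers, not from me:

```python
# Hypothetical regulation tiers keyed to monthly active users.
# All thresholds and tier names are invented for illustration only.
REGULATION_TIERS = [
    (1_000_000_000, "strictest"),  # Facebook-scale platforms
    (100_000_000, "strict"),
    (10_000_000, "moderate"),
]

def regulation_tier(monthly_active_users: int) -> str:
    """Return the applicable tier, or 'exempt' below every threshold."""
    for threshold, tier in REGULATION_TIERS:
        if monthly_active_users >= threshold:
            return tier
    return "exempt"  # smaller networks, forums, and businesses left alone

print(regulation_tier(3_000_000_000))  # strictest
print(regulation_tier(50_000))         # exempt
```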
I don’t think it’d be that simple.
Any given website URL could go viral at any moment. In the old days, that might look like a DDoS that brings down the site (aka the Slashdot effect, or hug of death), but these days many small sites are hosted on infrastructure that is protected against unexpectedly high traffic.
So if someone hosts deceptive content on their server and it can be viewed by billions, there would be a disconnect between a website’s reach and its accountability (to paraphrase Spider-Man’s Uncle Ben).
I agree it’s not that simple, but it’s just a proposed starting point for a solution. We could refine it further, then hand the refined idea to a lawyer as a charter to draft into a proper proposal, which could then be presented to the relevant governmental body to consider.
But few people like to put in that work. Even politicians don’t, and that’s why corporations get so much of what they want: they do that work, and they pay people to do it for them.
That said, view count isn’t the same as membership. This solution wouldn’t be perfect.
But it would be better than nothing at all, especially now that the advent of AI is turning the firehose of lies into a tsunami of lies. Currently, one side only grows stronger in its capacity for havoc and mischief while the other, quite literally, does nothing, and sometimes advocates for doing nothing. You could say it’s a reflection of the tolerance paradox we’re seeing today.
How high is your proposed number?
Why is Big = Bad?
Proton have over 100 million users.
Do we fine Proton AG because a bunch of shitheads abuse their platform to send malicious email? How do they detect it if it’s encrypted? Force them to backdoor the encryption?
Proton is not a social medium. As to “how high”, the lawmakers have to decide on that, hopefully after some research and public consultations. It’s not an unprecedented problem.
Another criterion might be revenue. If a company monetises users’ attention and makes above a certain amount, put extra moderation requirements on them.
Yeah, I work for your biggest social media competitor; why wouldn’t I just go post slop all over your platform with the intent of getting you fined?
Proton isn’t social media.
If you can’t understand why big = bad in terms of the dissemination of misinformation, then clearly we’re already at an impasse, and there’s no point in further discussion of possible numbers, statistics, and other variables for determining potential regulations.