  • Is there any clarity about what the future with Chat Control will look like? As in, what exactly will apps need to implement?

    This part about the self-assessment requirement confuses me:

    Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to “implement mitigating measures to counter that risk,” the Council notes.

    I assume all chat apps would have to take such measures, since arbitrary data, including CSAM, can be sent through them. Or could this quote be interpreted otherwise? I also wonder what exactly is meant by “voluntary” then.

    Does this “mitigating measure” in practice mean sending a hash of every image sent through the messenger to some service built by Google or Apple for comparison against known CSAM, since building such a hash database is only realistically feasible for the largest corporations? Or would the actual image itself have to leave the device, on the argument that a remote AI could identify CSAM even if it is not yet in any database? Perhaps a locally running AI model could do a good enough job that nothing has to leave the device during the evaluation stage.
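    For illustration, here is a rough sketch of what such on-device hash matching could look like. It uses a simple “average hash” as a toy stand-in for the proprietary perceptual hashes (e.g. Microsoft’s PhotoDNA or Apple’s NeuralHash) that real deployments would use; the known-hash database and the match threshold below are entirely hypothetical.

    ```python
    # Sketch of perceptual-hash matching on the client. The average hash
    # here is a toy stand-in for PhotoDNA/NeuralHash-style hashes, and
    # KNOWN_HASHES and THRESHOLD are made-up placeholders.
    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        """Shrink to size x size grayscale; one bit per pixel, set if the
        pixel is brighter than the mean. Survives rescaling/recompression."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (p > mean)
        return bits

    def hamming(a: int, b: int) -> int:
        """Number of differing bits between two hashes."""
        return bin(a ^ b).count("1")

    # Hypothetical database of known-CSAM hashes pushed to the client.
    KNOWN_HASHES = {0x1234_5678_9ABC_DEF0}  # placeholder value
    THRESHOLD = 5  # max Hamming distance (out of 64 bits) to call a match

    def should_flag(path: str) -> bool:
        h = average_hash(path)
        return any(hamming(h, k) <= THRESHOLD for k in KNOWN_HASHES)
    ```

    Even a toy scheme like this shows where false positives come from: two unrelated images can land within the distance threshold, which is exactly the review scenario below.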

    But then again, there will always be false positives, where an innocent person’s image would be uploaded to… the service provider (like Signal) for review? So you could never be sure that your communication stays private, since the risk of a false positive is always there. Regardless of what the solution looks like, users would have to give up full ownership of their devices, since this chat-control service could always decide to take over the device and upload their communications somewhere.
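    To put rough numbers on that concern (all figures assumed, not sourced): even a very low per-image false-positive rate adds up at messaging scale.

    ```python
    # Back-of-the-envelope, with made-up numbers: how many innocent images
    # would get flagged per day at a hypothetical false-positive rate?
    images_per_day = 5_000_000_000   # assumed volume across a large messenger
    false_positive_rate = 1e-6       # assumed 1-in-a-million per image
    print(images_per_day * false_positive_rate)  # -> 5000.0 innocent flags/day
    ```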