this post was submitted on 24 Jul 2023
232 points (100.0% liked)
Technology
I think you're missing the opposite point.
An AI trained on a given instance's admin decisions would reproduce the same censorship the admins already apply. We can agree on that.
An AI trained by a third party on unknown data (data that is, in fact, illegal to possess or inspect), which can detect "CSAM (and potentially other content)", would increase censorship of both CSAM... and of that "potentially other content", outside the control, preferences, or knowledge of the instance admins.
Using an external service to submit ALL content for a third-party-trained AI to make decisions not only lets the external service collect ALL the content (not just what gets censored), but also lets it change its decision parameters without prior notice or any kind of oversight, and apply them to ALL content.
The problem is the difference between the two: one is an AI that can make mistakes but mostly follows whatever an admin would do; the other is a 100% surveillance-state nightmare in the name of filtering 0.03% of content.
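To make the architectural difference concrete, here is a minimal sketch of the two moderation flows being contrasted. All names (`LocalModel`, `ThirdPartyClient`, the keyword check) are hypothetical stand-ins, not any real moderation API; the point is only where the content travels:

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: int
    body: str

class LocalModel:
    """Stands in for a classifier trained on this instance's own admin
    decisions: it sees the content, but nothing leaves the server."""
    def __init__(self, banned_terms):
        self.banned_terms = set(banned_terms)

    def predict(self, text: str) -> bool:
        # True = flag for removal, mirroring prior admin decisions
        return any(term in text.lower() for term in self.banned_terms)

class ThirdPartyClient:
    """Stands in for an external scanning service: every submitted body
    is retained by the service, and its decision rules are opaque."""
    def __init__(self):
        self.received = []           # the service ends up holding ALL content

    def submit(self, text: str) -> bool:
        self.received.append(text)   # collection happens regardless of verdict
        return False                 # opaque verdict the admin cannot audit

def moderate(post: Post, local: LocalModel, remote: ThirdPartyClient):
    local_flag = local.predict(post.body)    # content stays on-instance
    remote_flag = remote.submit(post.body)   # content leaves the instance
    return local_flag, remote_flag

client = ThirdPartyClient()
model = LocalModel({"spam"})
moderate(Post(1, "benign cat picture"), model, client)
moderate(Post(2, "spam spam spam"), model, client)
print(len(client.received))  # the external service saw both posts: 2
```

Note that the external service's `received` list grows with every post, flagged or not, which is exactly the collection concern raised above: the surveillance cost is paid on 100% of content to catch a tiny fraction of it.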