submitted 6 days ago* (last edited 6 days ago) by vermaterc@lemmy.ml to c/fediverse@lemmy.ml

I'm preparing a presentation on how to implement automated content moderation on social media. I wanted to talk a bit about how this is done by small forums, and Fediverse instances were an obvious focus of study for me. Is it all done by hand by human moderators, or are there tools that can filter out obvious violations of an instance's rules? I'm thinking mostly about images: are those filtered for nudity/violence?

[-] Lemvi@lemmy.sdf.org 3 points 5 days ago

Considering how often I see NSFW or NSFL content because it wasn't tagged... no, I don't think so; at least there doesn't seem to be anything widely adopted on Lemmy.

For images at least, it should be possible to create a bot that downloads any uploaded image, runs it through a moderation API like Sightengine, and then automatically removes the post or bans the user. Each instance would of course decide for itself whether to set up such a bot, so there is never going to be a fediverse-wide automod.
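A minimal sketch of what such a bot's scoring step could look like. This is not an existing Lemmy tool: the model names, score fields, and threshold below are assumptions based on Sightengine's public `check.json` endpoint and should be verified against their current documentation before use.

```python
import json
import urllib.parse
import urllib.request

# Assumed Sightengine endpoint; credentials come from your Sightengine account.
SIGHTENGINE_URL = "https://api.sightengine.com/1.0/check.json"
NUDITY_THRESHOLD = 0.85  # hypothetical cutoff; tune per instance policy


def classify_image(image_url, api_user, api_secret):
    """Ask the moderation API to score an image URL (network call, untested sketch)."""
    params = urllib.parse.urlencode({
        "url": image_url,
        "models": "nudity-2.1,gore-2.0",  # assumed model names, check the docs
        "api_user": api_user,
        "api_secret": api_secret,
    })
    with urllib.request.urlopen(f"{SIGHTENGINE_URL}?{params}", timeout=10) as resp:
        return json.load(resp)


def should_remove(result, threshold=NUDITY_THRESHOLD):
    """Decide removal from the API's score dict.

    Kept as a pure function so the instance's policy can be tested
    without hitting the network. Field names are assumptions about
    the response shape.
    """
    nudity = result.get("nudity", {})
    gore = result.get("gore", {})
    worst = max(
        nudity.get("sexual_activity", 0.0),
        nudity.get("sexual_display", 0.0),
        gore.get("prob", 0.0),
    )
    return worst >= threshold
```

The actual removal/ban step would then call the instance's own Lemmy API with a moderator account; keeping the decision logic separate from the API calls makes the threshold policy easy to audit and test.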

this post was submitted on 03 Aug 2025
17 points (100.0% liked)
