Australia fines Musk's X platform $386,000 over anti-child abuse gaps
(www.reuters.com)
Since there is no hierarchical top-level moderator/admin and every instance is supervised by its respective owner, responsibility for safety is effectively delegated to individual instance admins as far as their own instance goes. Or that's my reading of it, at least; anyone feel free to correct me if I'm wrong. Also, this conclusion doesn't account for some possible future law stating otherwise (decision-making entities follow weird, unpredictable logic...)
As for Mastodon itself, though, it could use some upgrades to its user management and reporting features. For example: an option to automate an instant reaction (like a temp-ban until reviewed) for certain categories of reports (like child abuse and extreme/shocking violence), so that someone reported for those things can't keep going until an admin sees and processes the report. Reports are also definitely not visible enough yet.
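Mastodon doesn't ship that kind of auto-reaction today, as far as I know, but the admin API already exposes enough to approximate it from outside the instance software. Here's a minimal sketch in Python that polls unresolved reports and auto-limits the reported account pending human review; the instance URL, token, and the set of categories to auto-limit are all placeholder assumptions, and the category values your instance uses may differ:

```python
import requests

INSTANCE = "https://example.social"   # placeholder instance
TOKEN = "ADMIN_SCOPED_TOKEN"          # needs admin:read:reports + admin:write:accounts scopes
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Hypothetical set of report categories that warrant an instant reaction.
AUTO_LIMIT_CATEGORIES = {"violation", "legal"}

def triage_open_reports():
    # Fetch unresolved reports from the admin API.
    resp = requests.get(
        f"{INSTANCE}/api/v1/admin/reports",
        params={"resolved": "false"},
        headers=HEADERS,
    )
    resp.raise_for_status()
    for report in resp.json():
        if report["category"] not in AUTO_LIMIT_CATEGORIES:
            continue
        account_id = report["target_account"]["id"]
        # "silence" limits the account until an admin reviews the report;
        # "suspend" would be the harsher option.
        action = requests.post(
            f"{INSTANCE}/api/v1/admin/accounts/{account_id}/action",
            data={
                "type": "silence",
                "report_id": report["id"],
                "text": "Auto-limited pending human review.",
            },
            headers=HEADERS,
        )
        action.raise_for_status()

if __name__ == "__main__":
    triage_open_reports()
```

Run from cron every minute or so, something like this would limit a reported account almost immediately after a critical report lands, while still leaving the final decision to a human admin.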
And what about the automatic detection and direct-message surveillance these regulators are asking for?
Well, if those become mandatory I'll just have to toss Mastodon, along with anything else too well known, into the bin of government-ruined software and start using hidden services... I will never willingly comply with spyware, not even (read: especially not) the government-approved kind.
I have no idea whether Mastodon has any plans to add those to the instance software... It probably will if legally obligated, I suppose, but I still sincerely hope not (just as I sincerely hope this proposal gets dismissed, given the privacy laws it obviously contradicts and the vulnerabilities that backdoors introduce).