I always just turn DRM off and don't subscribe to these things... Now I know I had a better reason than I thought I did.
It's times like these when I'm glad I refused to even log into Google to view YouTube, let alone buy subscriptions. I also refuse to regard downloading content that's freely available to view (without logging in) as piracy.
These subscriptions are undoubtedly a rip-off. For those saying creators get a "cut", there's a reason why sites like Patreon exist. It's substantially better for creators if you subscribe to them directly and get your videos from Patreon. Same with Nebula etc. If I really had to pay then I'd do that (and already do for some stuff that was never on YouTube anyway).
For the price of YT Premium you can get enough subscriptions to have plenty of content to watch, even if you don't want to subvert Google. So there's zero reason to throw money at them for this. None at all.
Signal. Also, the solution to the "no-one on Signal" problem is simply to refuse to use insecure platforms like WhatsApp. If people want to talk to you, then they have to download Signal. They might get annoyed with you, but sometimes a bit of coercion is necessary to get people to do what's good for them.
Some of us remember there was a time when things like Reddit didn't exist, and neither did Facebook, Twitter etc. Lots of people lived just fine without them then. It's completely possible to take a hard line on this stuff and just refuse to use sites/apps/products that don't respect your privacy. Remember, there's always a smaller, friendlier or mechanical version.
And this is why you shouldn't allow things essential to your life to be mediated by some faceless tech giant. Self-hosting may be more effort, but you can at least guarantee that any issues won't be the result of some bureaucratic nonsense or administrative error. This is not just smart home stuff - there are similar examples affecting email, photo galleries, file storage, etc.
Stressful training and work environment, long hours, and the pay isn't that great either. This really is the government's problem to solve - and it's probably not going to be solved just by paying people once to complete their degree; it will have to be addressed throughout their careers with more pay and more support. Which of course means the public will eventually end up footing at least some of the bill - but the alternative, where education is compromised, will end up costing even more.
Handing it to LibreOffice or AbiWord, I guess. Or for cloud fans, Google Docs. I don't think anyone is going to go without a word processor because of this.
Or you could make public transport run faster and more efficiently, reducing the number of people who drive.
Brave Search is now frequently beating DDG and Startpage for accuracy of search results. It's like using Google 10 years ago when it was actually good but without the ads, tracking and pestering to "log in". Good stuff.
So instances that are actually supporting CSAM can and should be dealt with by law enforcement. That much is simple (and I'm surprised it hasn't been done with certain ... instances, to be honest). But I think the issues that seem less clearly solved have known, working solutions that apply to other parts of the web as well. No content moderation is perfect, but in general, if admins are acting in good faith, I don't think there should be too much of a problem:
- For when federation inadvertently spreads some of the material through to other instances' databases: Isn't this the same situation as when ISPs used to cache web traffic to save on bandwidth costs? In that situation, too, browsed web pages would end up in the ISP's cache, which could then harbour whatever material the user was looking at. As I recall, the ISP would just ban CSAM and other illegal material in their terms of service, and remove anyone reported as violating the rule, and that sufficed.
- As for "bad" instances/users: It's impossible to block all instances and all users that might disseminate this material as you'd have to go to a "block everything, then allow known entities" rule which would break the Fediverse model. Again, users or site admins found to be acting in bad faith should be blocked and reported (either automatically or manually). Some may slip through the net, but as long as admins are seen to be doing the best they can, that should be enough.
There seem to be concerns about "surveillance" of material on Mastodon, which strikes me as a bit odd. Mastodon isn't a private platform. People who want private messaging should use an E2EE messaging app like Signal, not a social networking platform like Mastodon (or Twitter, Threads etc.). Mastodon data is already public and is likely already being surveilled, and will be regardless of what anyone involved with the network wants, because there's no access control on it anyway. Having Mastodon itself contain code to keep the network clean, even if it only applies to part of the network, just lets the admins running that code take some of the responsibility on themselves, reducing the temptation for third parties to do it for them.
Expensive and impossible to customise effectively, making it much poorer value than Android. Not that Android is perfect. The instant some form of non-proprietary Linux (like Debian w/phosh, PostmarketOS, etc.) becomes viable as a daily driver, Android is out as well.
Same... Have done for ages now. Don't know how anyone puts up with the default behaviour.