Also how do we even explain this to normal people who are not extremely online? How can we help neighbors or the elderly recognize when they are being nudged by an algorithm or seeing a digital caricature?
As an almost 'elder' myself (soon to turn 60), I can tell you age is not the issue. A lot of you younger people seem as blind to the threat(s) as we older people do, maybe worse, since you're living 24/7 through that tech... and are still willing to use it, even insisting on using nothing but big tech, and certainly not considering using no tech at all. See how easy it is to conclude things from meaningless examples?
So, imvho, a good first step before trying to explain anything to anyone would be to stop focusing on any demographic as the 'weaker' one, and instead ask why so many different types of people fall into the same trap. Don't you think?
If it’s not age (or gender, or race), what could explain why so many of us fall for big tech? Hint: it probably has a lot more to do with human psychology than with age.
Has anyone here successfully implemented local-first solutions that reduced their reliance on big tech AI?
I don't rely on AI, at all. Big tech or no. As far as I'm concerned, problem solved.
Edit: I should have made it clear I'm half joking here: I really don't rely on AI, but I'm also aware my personal choice doesn't solve anything.
I am looking for ways to foster cognitive immunity
I don't know about immunity, but the way to avoid falling for most scams and lies... used to be common knowledge: cross-reference your sources (of news, or anything else), and never trust any single source.
It works against scams (don't blindly follow a link claiming there's an issue with this or that; double-check whatever unexpected event they want you to react to, using a different source). It works just as well against most manipulation, lies, and almost everything else.
How do you use that against the tsunami of fake news and emotional turds that social media throws at its users?
Well, my solution was to quit using those shit services. For anyone less radical than I am: when faced with what looks like many different 'sources' (the people we follow) retweeting the same emotional turd or the same lie, one solution is to learn to see all those people (friends included) as merely avatars of one and the same algorithm, feeding them the same shit in order to make them react in a certain predictable way. The issue? It takes some effort, sometimes a lot. And it is not ‘friendly’. Too bad; I feel no desire to be friendly with an algorithm, even when it usurps the appearance of a friend.
If we want to protect ourselves and our local communities from being manipulated by these black box models how do we actually do it?
We teach people that communities used to exist offline, and have existed for many thousands of years without relying on any app, any AI, any algorithm, without any Like, Subscribe or Upvote, without any tech… besides the shared ability and desire to talk with one another. Without anything but the willingness to meet and work together.
I know it sounds silly and quite impractical considering people can instantly chat across the entire planet, but, even more so in the post-democratic and post-freedom societies we Westerners now live in, offline communities should once again become a realistic option, if not our main focus. Big corps/govs can't track us nearly as easily in the privacy of our homes, as long as we... don't use tech to communicate with one another.
Encouraging people (of all ages) to even consider not using their stupid phone and some stupid app to share some stupid content (I may be slightly trolling here) is the real issue here. People are lazy as fuck. We are. They want immediate gratification (validation). We all do. And it will take a lot of work and (re)education to change that.
edit: typos.