Image is of a destroyed American AWACS plane in Saudi Arabia. These aircraft exist in very limited supply, and each is enormously expensive, both monetarily and in terms of components. Iran hit this one with a precision drone strike that likely cost ~$20,000.
I don't have much to add to the last megathread description. This isn't to say that nothing has happened or changed since then - decades are still happening in weeks - but the general flow of the war remains the same. Trump sometimes threatens to open the Strait with troops and flatten Iran to rubble, and at other times says he's gonna back off and let other countries handle it if they really want little trifles like "fuel" and "energy" so much. Iran continues to strike across the Middle East. The West continues to bomb civilian infrastructure due to its relative inability to affect the missile cities. In all: things are generally getting worse for America and the Zionists.
April is the month where the last ships that left Hormuz before it was closed will arrive around the world, so the last month of economic turmoil has been a mere prelude to what's going to occur in the near-future. The silver lining is that Iran appears to be formalizing the new state of affairs in Hormuz, creating a rial-based toll to allow passage between a pair of Iranian-controlled islands where they can be monitored, meaning that, as long as the US doesn't do something exceptionally stupid, the global energy crisis may "only" last a couple years instead of simply being the new reality from now on. Some countries have already agreed to this arrangement, and others will inevitably follow despite their consternation as their economies increasingly suffer.
Last week's thread is here.
The Imperialism Reading Group is here.
Please check out the RedAtlas!
The bulletins site is here. Currently not used.
The RSS feed is here. Also currently not used.
The Zionist Entity's Genocide of Palestine
Sources on the fighting in Palestine against the temporary Zionist entity. In general, CW for footage of battles, explosions, dead people, and so on:
UNRWA reports on the Zionists' destruction and siege of Gaza and the West Bank.
English-language Palestinian Marxist-Leninist twitter account. Alt here.
English-language twitter account that collates news.
Arabic-language twitter account with videos and images of fighting.
English-language (with some Arabic retweets) Twitter account based in Lebanon. - Telegram is @IbnRiad.
English-language Palestinian Twitter account which reports on news from the Resistance Axis. - Telegram is @EyesOnSouth.
English-language Twitter account in the same group as the previous two. - Telegram here.
Mirrors of Telegram channels that have been erased by Zionist censorship.
Russia-Ukraine Conflict
Examples of Ukrainian Nazis and fascists
Examples of racism/euro-centrism during the Russia-Ukraine conflict
Sources:
Defense Politics Asia's youtube channel and their map. Their youtube channel has substantially diminished in quality but the map is still useful.
Moon of Alabama, which tends to have interesting analysis. Avoid the comment section.
Understanding War and the Saker: reactionary sources that have occasional insights on the war.
Alexander Mercouris, who does daily videos on the conflict. While he is a reactionary and surrounds himself with likeminded people, his daily update videos are relatively brainworm-free and good if you don't want to follow Russian telegram channels to get news. He also co-hosts The Duran, which is more explicitly conservative, racist, sexist, transphobic, anti-communist, etc when guests are invited on, but is just about tolerable when it's just the two of them if you want a little more analysis.
Simplicius, who publishes on Substack. Like others, his political analysis should be soundly ignored, but his knowledge of weaponry and military strategy is generally quite good.
On the ground: Patrick Lancaster, an independent and very good journalist reporting in the warzone on the separatists' side.
Unedited videos of Russian/Ukrainian press conferences and speeches.
Pro-Russian Telegram Channels:
Again, CW for anti-LGBT and racist, sexist, etc speech, as well as combat footage.
https://t.me/aleksandr_skif ~ DPR's former Defense Minister and Colonel in the DPR's forces. Russian language.
https://t.me/Slavyangrad ~ A few different pro-Russian people gather frequent content for this channel (~100 posts per day), some socialist, but all socially reactionary. If you can only tolerate using one Russian telegram channel, I would recommend this one.
https://t.me/s/levigodman ~ Does daily update posts.
https://t.me/patricklancasternewstoday ~ Patrick Lancaster's telegram channel.
https://t.me/gonzowarr ~ A big Russian commentator.
https://t.me/rybar ~ One of the biggest, if not the biggest, Russian telegram channels focusing on the war. Actually quite balanced, maybe even pessimistic about Russia. Produces interesting and useful maps.
https://t.me/epoddubny ~ Russian language.
https://t.me/boris_rozhin ~ Russian language.
https://t.me/mod_russia_en ~ Russian Ministry of Defense. Does daily, if rather bland, updates on the number of Ukrainians killed, etc. The figures appear to be approximately accurate; if you don't believe them, reduce all numbers by 25% as a 'propaganda tax'. Does not cover everything, for obvious reasons, and virtually never details Russian losses.
https://t.me/UkraineHumanRightsAbuses ~ Pro-Russian, documents abuses that Ukraine commits.
Pro-Ukraine Telegram Channels:
Almost every Western media outlet.
https://discord.gg/projectowl ~ Pro-Ukrainian OSINT Discord.
https://t.me/ice_inii ~ Alleged Ukrainian account with a rather cynical take on the entire thing.
AI’s fluency in other languages hides a Western worldview that can mislead users − a scholar of Indonesian society explains
Fluency is not the same as understanding
I have studied Indonesian society, media and culture for more than 30 years. That gives me a particular vantage point on a problem that reaches well beyond Indonesia: large language models – LLMs – like ChatGPT, Claude and Gemini can now speak dozens of languages with remarkable fluency. That fluency creates the impression that AI understands local cultures.
Producing grammatically correct Indonesian, Arabic, Swahili or Hindi, however, does not change the underlying worldview through which these systems reason. It does not alter how they think about people, relationships, responsibility or what counts as a good outcome.
Those assumptions are shaped by training data drawn predominantly from English-language sources based in the United States. Meta’s open-weight model LLaMA 2 was trained on approximately 89.7% English-language text; LLaMA 3 includes only about 5% non-English data. Major commercial models don’t publish equivalent breakdowns but draw heavily on the same sources. Arabic, the fifth-most-spoken language globally, accounts for under 1% of content in large training datasets. Languages with tens of millions of speakers, including Bengali and Hausa, barely appear.
Beneath the surface of these multilingual conversations, English functions as a hidden intermediary. A study by researchers at the University of Oxford found that LLMs routinely conduct their core reasoning in English, even when prompted in other languages. They translate the output at the final stage. A user receives flawless text in their preferred language, but the underlying logic originates elsewhere.
What the data shows
To examine how this plays out in practice, I ran experiments with ChatGPT, Claude and Gemini. I asked questions in both English and Indonesian about concepts such as education, responsibility, well-being and several Indonesian terms that resist direct translation into English. These included terms such as “gotong royong,” which describes a tradition of communal mutual assistance.
Then I asked questions about education in both languages, using the word “pendidikan” in Indonesian. The answers were consistently centered on individual development, personal autonomy, critical thinking and preparation for the labor market.
What largely disappeared were the dimensions of pendidikan that Indonesian educational traditions have historically emphasized. In Indonesia, education has long been focused on ethical discipline. Scholars of Indonesian education such as Christopher Bjork and Robert Hefner have documented how distinct these traditions are from models that treat education primarily as a path to individual advancement and career preparation, which is the lens through which the AI tools viewed education.
The Indonesian concept of “malu” offers a starker example. Often translated as “shame” or “embarrassment,” malu has been analyzed by anthropologists Clifford Geertz and Tom Boellstorff as something closer to a shared social awareness.
A person might feel malu when speaking out of turn in front of elders, or when a family member’s behavior reflects poorly on the household. It regulates conduct and signals awareness of one’s position within a web of relationships. It is cultivated, not merely felt. It is a form of relational awareness rather than a private psychological event.
When asked directly to define malu, the models acknowledged its social dimensions. In scenario-based questions that simply used the word without asking for a definition, however, all three fell back on the English translation of shame, consistently framing it as an individual emotional experience.
One representative response framed malu as a normal emotional reaction to be managed through self-reflection and confidence-building – a personal psychological problem rather than a social one. The relational dimensions of the concept disappeared entirely, replaced by the language of individual emotional regulation.
A distinctly American worldview travels inside the translation, largely unannounced.
Why this probably won’t change soon
Translation is far cheaper than building separate models for each language community: train one model on the vast English-language web, then use multilingual output capabilities to serve global markets. As media scholar Safiya Umoja Noble argues about algorithmic systems more broadly, what looks like a technical outcome is actually a structural one, shaped by who has the wealth and infrastructure to build these systems.
The embedded worldview isn’t a mistake; it’s what happens when knowledge production is profit-seeking.
The main exceptions are Chinese models such as DeepSeek and Alibaba’s Qwen. They represent a genuine alternative to the U.S.-dominated pipeline, though research shows they operate through a distinctly Chinese cultural lens. Asked about a workplace disagreement, for instance, they tend to advise silence or indirect phrasing to preserve harmony rather than the direct, private correction that Western models recommend.
Other regional efforts, such as SEA-LION for Southeast Asia and Kan-LLaMA for the Indian language Kannada, use U.S. models as their foundation. They add additional vocabulary and cultural information related to local languages. But the core logic remains tied to the original U.S. training.
Why this matters more than it might seem
One might reasonably ask whether this is simply a limitation users can work around. Decades of media scholarship demonstrate how audiences interpret foreign media through their own cultural frameworks.
For example, anthropologist Brian Larkin documented how Muslim viewers in Kano, northern Nigeria, reinterpreted Bollywood films through an Islamic moral lens, reading their narratives as reinforcing local values of propriety and ethical conduct. That dynamic, however, depends on encountering media as something with a visible origin: you need to know where your media is coming from.
Conversational AI is different. Research at Harvard Business School finds that people increasingly use AI systems for emotional support, advice and companionship. When a culturally specific worldview is delivered through a relationship that feels attentive and empathetic, in your own language, it arrives less as a claim to be evaluated and more as a shared premise within a dialogue. It becomes difficult to notice, and harder to contest.
The concern is that these perspectives become the new normal. Certain ways of reasoning about family life, education and responsibility may come to feel natural and self-evident. Linguistic diversity among AI systems is real and growing. Cultural worldview diversity, however, has not kept pace.
Epistemicide, whether done intentionally by specific actors or through the logics of Capital, has been a pivotal part of Western culture, which is why Malaysia has invested in developing a fully indigenous LLM.
You could argue this is cultural genocide through covert means. Given that they came up with that term to describe China, you have to wonder what conversations about cultural genocide they were already having about what they wanted to do to others and how it could be achieved.
Every society ends up as broken and terrible as America: a cottage industry of parenting and child experts simultaneously fighting over how to estrange and then recover your familial relationships.
Thanks for sharing the article, very interesting read. It also reinforces my view that there is literally no reason to use an LLM if what you want to know is something personal, rather than just a cursory glance at some specific piece of information. In fact, I don't even understand how people can be comfortable asking chatbots for personal advice and guidance.
What I mean is I'll sometimes ask Claude or Deepseek for help researching specific topics, and even then I don't just ask something like "what is the airspeed of an unladen swallow?". Instead I'll ask "give me some sources that discuss swallow airspeed in different contexts" and do the research myself in the provided links, because I don't trust the chatbot to give me accurate information.
What's fucked up is that I would normally use Google for this, except that now Google sucks ass. In other words, I'm using the tech industry's questionable solution to the problem that they created themselves.
I think you nailed it in terms of the practical applications of the “chatbots”. I’m of the opinion that it’s a smarter Wikipedia and Google, useful at the start and for certain menial tasks. Which is why I think in Asian contexts “AI” is seen more like any other technological advancement and not some messianic invention like it is so often portrayed by techno-financial Western Capital.
There are also obviously much more niche and specialized applications in research and industry, but those won’t be the ones that get covered in mainstream media.
I think like with a lot of things, it really depends on how you use it, and I personally will not be using it as some sort of therapist, but it will inevitably be used that way by some people.
These applications are where it will actually persist as the rest collapses: hyper-specific micro-models that handle a single task. Very useful in industrial applications and data pipelines/warehousing.
Vector databases are pretty neat too. You can build indexes that embed records into a shared vector space, which gets you really fast and surprisingly accurate full-text search, even for images.
These will likely keep getting simpler and simpler, though, until we're just back to LZMA embeddings with image tags used to locate images.
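The embed-then-search idea in the comment above can be sketched in a few lines. This is a toy, not any real vector database's API: the `embed` function here just hashes character bigrams into a fixed-size normalized vector so the example stays self-contained, where a real system would use learned embeddings from a neural model, and `TinyVectorIndex` is a hypothetical name for a brute-force in-memory index.

```python
import zlib
import numpy as np

def embed(text, dim=64):
    # Toy embedding: hash character bigrams into a fixed-size vector,
    # then L2-normalize. Stands in for a learned embedding model.
    v = np.zeros(dim)
    t = text.lower()
    for a, b in zip(t, t[1:]):
        v[zlib.crc32((a + b).encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

class TinyVectorIndex:
    # Minimal in-memory vector index: store normalized vectors and
    # answer queries by cosine similarity (a dot product), brute force.
    def __init__(self):
        self.keys, self.vecs = [], []

    def add(self, key, text):
        self.keys.append(key)
        self.vecs.append(embed(text))

    def search(self, query, k=3):
        sims = np.stack(self.vecs) @ embed(query)
        top = np.argsort(-sims)[:k]
        return [(self.keys[i], float(sims[i])) for i in top]

index = TinyVectorIndex()
index.add("doc1", "swallow airspeed measurements in flight")
index.add("doc2", "medieval castle construction techniques")
index.add("doc3", "bird flight speed and aerodynamics")
results = index.search("how fast do swallows fly")
```

Real systems replace the brute-force dot product with approximate nearest-neighbor structures so the lookup stays fast at millions of records, but the query model is the same: embed the query, return the closest stored vectors.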
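The "LZMA embeddings" idea above resembles normalized compression distance (Cilibrasi and Vitányi), which scores similarity by how well two inputs compress together. A minimal sketch, with hypothetical image tags as the searchable text:

```python
import lzma

def ncd(a: bytes, b: bytes) -> float:
    # Normalized compression distance: values near 0 mean the inputs
    # share a lot of structure, values near 1 mean they are unrelated.
    # Compressed length stands in for Kolmogorov complexity.
    ca = len(lzma.compress(a))
    cb = len(lzma.compress(b))
    cab = len(lzma.compress(a + b))
    return (cab - min(ca, cb)) / max(ca, cb)

# Hypothetical image tags; in the scheme described above, a text query
# would be matched to images by compression distance against their tags.
query    = b"swallow flying over water at dusk, wildlife photo"
tag_bird = b"barn swallow in flight over the lake at dusk, wildlife photo"
tag_wall = b"stone castle wall with moss and ivy, architecture photo"
distances = {"bird": ncd(query, tag_bird), "wall": ncd(query, tag_wall)}
```

No model, no training, no index to maintain, which is presumably the appeal of "getting simpler"; the trade-off is that compressing per query is far slower than a precomputed vector lookup.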
As if the world hadn’t become Americanized enough
Good read
We need a MGS5 Skullface emote. :skullface: here lmao
Another sign AI is being trained with Reddit posts.