submitted 3 weeks ago* (last edited 3 weeks ago) by SeventyTwoTrillion@hexbear.net to c/news@hexbear.net

A reminder that as the US continues to threaten countries around the world, fedposting is to be very much avoided (even with qualifiers like "in Minecraft") and comments containing it will be removed.

Image is of a destroyed American AWACS plane in Saudi Arabia, of which there is a very limited supply and each of which is enormously expensive both monetarily and in terms of components. Iran hit this with a precision drone strike that likely cost ~$20,000.


I don't have much to add from the last megathread description. This isn't to say that nothing has happened or has changed since then - decades are still happening in weeks - but the general flow of the war remains the same. Trump sometimes threatens to open the Strait with troops and flatten Iran to rubble, and other times threatens that he's gonna back off and let other countries handle it if they really want little trifles like "fuel" and "energy" so much. Iran continues to strike across the Middle East. The West continues to bomb civilian infrastructure due to their relative inability to affect the missile cities. In all: things are generally getting worse for America and the Zionists.

April is the month where the last ships that left Hormuz before it was closed will arrive around the world, so the last month of economic turmoil has been a mere prelude to what's going to occur in the near-future. The silver lining is that Iran appears to be formalizing the new state of affairs in Hormuz, creating a rial-based toll to allow passage between a pair of Iranian-controlled islands where they can be monitored, meaning that, as long as the US doesn't do something exceptionally stupid, the global energy crisis may "only" last a couple years instead of simply being the new reality from now on. Some countries have already agreed to this arrangement, and others will inevitably follow despite their consternation as their economies increasingly suffer.


Last week's thread is here.
The Imperialism Reading Group is here.

Please check out the RedAtlas!

The bulletins site is here. Currently not used.
The RSS feed is here. Also currently not used.

The Zionist Entity's Genocide of Palestine

If you have evidence of Zionist crimes and atrocities that you wish to preserve, there is a thread here in which to do so.

Sources on the fighting in Palestine against the temporary Zionist entity. In general, CW for footage of battles, explosions, dead people, and so on:

UNRWA reports on the Zionists' destruction and siege of Gaza and the West Bank.

English-language Palestinian Marxist-Leninist twitter account. Alt here.
English-language twitter account that collates news.
Arabic-language twitter account with videos and images of fighting.
English-language (with some Arabic retweets) Twitter account based in Lebanon. - Telegram is @IbnRiad.
English-language Palestinian Twitter account which reports on news from the Resistance Axis. - Telegram is @EyesOnSouth.
English-language Twitter account in the same group as the previous two. - Telegram here.

Mirrors of Telegram channels that have been erased by Zionist censorship.

Russia-Ukraine Conflict

Examples of Ukrainian Nazis and fascists
Examples of racism/euro-centrism during the Russia-Ukraine conflict

Sources:

Defense Politics Asia's youtube channel and their map. Their youtube channel has substantially diminished in quality but the map is still useful.
Moon of Alabama, which tends to have interesting analysis. Avoid the comment section.
Understanding War and the Saker: reactionary sources that have occasional insights on the war.
Alexander Mercouris, who does daily videos on the conflict. While he is a reactionary and surrounds himself with likeminded people, his daily update videos are relatively brainworm-free and good if you don't want to follow Russian telegram channels to get news. He also co-hosts The Duran, which is more explicitly conservative, racist, sexist, transphobic, anti-communist, etc when guests are invited on, but is just about tolerable when it's just the two of them if you want a little more analysis.
Simplicius, who publishes on Substack. Like others, his political analysis should be soundly ignored, but his knowledge of weaponry and military strategy is generally quite good.
On the ground: Patrick Lancaster, an independent and very good journalist reporting in the warzone on the separatists' side.

Unedited videos of Russian/Ukrainian press conferences and speeches.

Pro-Russian Telegram Channels:

Again, CW for anti-LGBT and racist, sexist, etc speech, as well as combat footage.

https://t.me/aleksandr_skif ~ DPR's former Defense Minister and Colonel in the DPR's forces. Russian language.
https://t.me/Slavyangrad ~ A few different pro-Russian people gather frequent content for this channel (~100 posts per day), some socialist, but all socially reactionary. If you can only tolerate using one Russian telegram channel, I would recommend this one.
https://t.me/s/levigodman ~ Does daily update posts.
https://t.me/patricklancasternewstoday ~ Patrick Lancaster's telegram channel.
https://t.me/gonzowarr ~ A big Russian commentator.
https://t.me/rybar ~ One of, if not the, biggest Russian telegram channels focussing on the war out there. Actually quite balanced, maybe even pessimistic about Russia. Produces interesting and useful maps.
https://t.me/epoddubny ~ Russian language.
https://t.me/boris_rozhin ~ Russian language.
https://t.me/mod_russia_en ~ Russian Ministry of Defense. Posts daily, if rather bland, updates on the number of Ukrainians killed, etc. The figures appear to be approximately accurate; if you don't believe them, reduce all numbers by 25% as a 'propaganda tax'. Does not cover everything, for obvious reasons, and virtually never details Russian losses.
https://t.me/UkraineHumanRightsAbuses ~ Pro-Russian, documents abuses that Ukraine commits.

Pro-Ukraine Telegram Channels:

Almost every Western media outlet.
https://discord.gg/projectowl ~ Pro-Ukrainian OSINT Discord.
https://t.me/ice_inii ~ Alleged Ukrainian account with a rather cynical take on the entire thing.


[-] seaposting@hexbear.net 100 points 3 weeks ago

AI’s fluency in other languages hides a Western worldview that can mislead users − a scholar of Indonesian society explains

A friend in Indonesia recently told me about a conversation he had with ChatGPT. He had typed a question in Indonesian – Bahasa Indonesia – about how to handle a difficult family dispute. The chatbot responded fluently, in perfect Indonesian, with advice about communication strategies and conflict resolution. The grammar was flawless. The tone was appropriate. And yet something felt off.

What the AI offered was advice rooted in American cultural assumptions: prioritize your own preferences, communicate directly, and if family members don’t respect your boundaries, consider cutting them off.

The response was in Indonesian but shaped by values that centered individual autonomy over the consensus-building, social harmony and collective family dynamics that tend to matter more in Indonesian social life.

My friend was skeptical enough to notice the mismatch and mention it to me. Many users might not. That is what prompted my research, published in the International Review of Modern Sociology, into a pattern I found across major AI systems: Even when they were fluent in several languages, the language models retained their Western worldview. I call this “epistemological persistence.”


Fluency is not the same as understanding

I have studied Indonesian society, media and culture for more than 30 years. That gives me a particular vantage point on a problem that reaches well beyond Indonesia: large language models – LLMs – like ChatGPT, Claude and Gemini can now speak dozens of languages with remarkable fluency. That fluency creates the impression that AI understands local cultures.

Producing grammatically correct Indonesian, Arabic, Swahili or Hindi, however, does not change the underlying worldview through which these systems reason. It does not alter how they think about people, relationships, responsibility or what counts as a good outcome.

Those assumptions are shaped by training data drawn predominantly from English-language sources based in the United States. Meta’s open-weight model LLaMA 2 was trained on approximately 89.7% English-language text; LLaMA 3 includes only about 5% non-English data. Major commercial models don’t publish equivalent breakdowns but draw heavily on the same sources. Arabic, the fifth-most-spoken language globally, accounts for under 1% of content in large training datasets. Languages with tens of millions of speakers, including Bengali and Hausa, barely appear.

Beneath the surface of these multilingual conversations, English functions as a hidden intermediary. A study by researchers at the University of Oxford found that LLMs routinely conduct their core reasoning in English, even when prompted in other languages. They translate the output at the final stage. A user receives flawless text in their preferred language, but the underlying logic originates elsewhere.

What the data shows

To examine how this plays out in practice, I ran experiments with ChatGPT, Claude and Gemini. I asked questions in both English and Indonesian about concepts such as education, responsibility, well-being and several Indonesian terms that resist direct translation into English. These included terms such as “gotong royong,” which describes a tradition of communal mutual assistance.

Then I asked questions about education in both languages, using the word “pendidikan” in Indonesian. The answers were consistently centered on individual development, personal autonomy, critical thinking and preparation for the labor market.

What largely disappeared were the dimensions of pendidikan that Indonesian educational traditions have historically emphasized. In Indonesia education has long been focused on ethical discipline. Scholars of Indonesian education such as Christopher Bjork and Robert Hefner have documented how distinct these traditions are from models that treat education primarily as a path to individual advancement and career preparation, which is the lens through which the AI tools viewed education.

The Indonesian concept of “malu” offers a starker example. Often translated as “shame” or “embarrassment,” malu has been analyzed by anthropologists Clifford Geertz and Tom Boellstorff as something closer to a shared social awareness.

A person might feel malu when speaking out of turn in front of elders, or when a family member’s behavior reflects poorly on the household. It regulates conduct and signals awareness of one’s position within a web of relationships. It is cultivated, not merely felt. It is a form of relational awareness rather than a private psychological event.

When asked directly to define malu, the models acknowledged its social dimensions. In scenario-based questions that simply used the word without asking for a definition, however, all three fell back on the English translation of shame, consistently framing it as an individual emotional experience.

One representative response framed malu as a normal emotional reaction to be managed through self-reflection and confidence-building – a personal psychological problem rather than a social one. The relational dimensions of the concept disappeared entirely, replaced by the language of individual emotional regulation.
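The probing method described above - asking each model to define a term directly, then separately posing a scenario that merely uses the term - can be sketched roughly as follows. The terms, scenario sentences, and the `ask_model` placeholder are illustrative assumptions, not the author's actual experimental materials:

```python
# Rough sketch of the definition-vs-scenario probe described in the article.
# The prompts below are illustrative; ask_model stands in for whichever
# chat API (ChatGPT, Claude, or Gemini) is being tested.

TERMS = {
    # term -> an Indonesian scenario sentence that uses it without defining it
    "malu": "Adik saya merasa malu setelah berbicara di depan para tetua. Apa saran Anda?",
    "gotong royong": "Warga desa mengadakan gotong royong membersihkan sungai. Bagaimana saya bisa membantu?",
}

def build_probes(terms):
    """Pair each term with a direct-definition prompt and a scenario prompt.

    Definition prompts tend to elicit the socially embedded meaning;
    scenario prompts reveal whether the model silently falls back on the
    nearest English concept (e.g. malu -> 'shame')."""
    probes = []
    for term, scenario in terms.items():
        probes.append((term, "definition", f"What does '{term}' mean in Indonesian culture?"))
        probes.append((term, "scenario", scenario))
    return probes

probes = build_probes(TERMS)
# for term, kind, prompt in probes:
#     print(term, kind, ask_model(prompt))
```

Comparing the two response sets per term is what exposes the gap: the definitional answer acknowledges the relational meaning, while the scenario answer quietly substitutes the individualist English frame.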

A distinctly American worldview travels inside the translation, largely unannounced.

Why this probably won’t change soon

Translation is far cheaper than building models natively in each language: Train one model on the vast English-language web, then use multilingual output capabilities to serve global markets. As media scholar Safiya Umoja Noble argues about algorithmic systems more broadly, what looks like a technical outcome is actually a structural one, shaped by who has the wealth and infrastructure to build these systems.

The embedded worldview isn’t a mistake; it’s what happens when knowledge production is profit-seeking.

The main exceptions are Chinese models such as DeepSeek and Alibaba’s Qwen. They represent a genuine alternative to the U.S.-dominated pipeline, though research shows they operate through a distinctly Chinese cultural lens. Asked about a workplace disagreement, for instance, they tend to advise silence or indirect phrasing to preserve harmony rather than the direct, private correction that Western models recommend.

Other regional efforts, such as SEA-LION for Southeast Asia and Kan-LLaMA for the Indian language Kannada, use U.S. models as their foundation. They add additional vocabulary and cultural information related to local languages. But the core logic remains tied to the original U.S. training.

Why this matters more than it might seem

One might reasonably ask whether this is simply a limitation users can work around. Decades of media scholarship demonstrate how audiences interpret foreign media through their own cultural frameworks.

For example, anthropologist Brian Larkin documented how viewers in northern Nigeria rework the narratives of Bollywood films to align with local Islamic values. Larkin found that Muslim viewers in Kano reinterpreted Bollywood films through an Islamic moral lens, reading their narratives as reinforcing local values of propriety and ethical conduct. That dynamic depends on encountering media as something with a visible origin. But to do that, you need to know where your media is coming from.

Conversational AI is different. Research at Harvard Business School finds that people increasingly use AI systems for emotional support, advice and companionship. When a culturally specific worldview is delivered through a relationship that feels attentive and empathetic, in your own language, it arrives less as a claim to be evaluated and more as a shared premise within a dialogue. It becomes difficult to notice, and harder to contest.

The concern is that these perspectives become the new normal. Certain ways of reasoning about family life, education and responsibility may come to feel natural and self-evident. Linguistic diversity among AI systems is real and growing. Cultural worldview diversity, however, has not kept pace.

Epistemicide - whether done intentionally by specific actors or through the logics of Capital - has been a pivotal part of Western culture. Which is why Malaysia has invested in developing a fully indigenous LLM.

[-] Awoo@hexbear.net 43 points 3 weeks ago* (last edited 3 weeks ago)

You could argue this is cultural genocide through covert means. Given that they came up with that term to describe China, you have to wonder what conversations about cultural genocide they were already having about what they wanted to do to others and how it could be achieved.

[-] marxisthayaca@hexbear.net 19 points 3 weeks ago

Only a society as broken and terrible as America could produce a cottage industry of parenting and child experts simultaneously fighting over how to estrange from and how to recover your familial relationships.

[-] joaomarrom@hexbear.net 33 points 3 weeks ago* (last edited 3 weeks ago)

Thanks for sharing the article, very interesting read. It also reinforces my point of view that there is literally no reason to use an LLM if what you want to know is something personal, rather than just a cursory glance at some specific piece of information. In fact, I don't even understand how people can be comfortable asking chatbots for personal advice and guidance.

What I mean is I'll sometimes ask Claude or Deepseek for help researching specific topics, and even then I don't just ask something like "what is the airspeed of an unladen swallow?". Instead I'll ask "give me some sources that discuss swallow airspeed in different contexts" and do the research myself in the provided links, because I don't trust the chatbot to give me accurate information.

What's fucked up is that I would normally use Google for this, except that now Google sucks ass. In other words, I'm using the tech industry's questionable solution to the problem that they created themselves.

[-] seaposting@hexbear.net 17 points 3 weeks ago

I think you nailed it in terms of the practical applications of the “chatbots”. I’m of the opinion that it’s a smarter Wikipedia and Google, useful at the start and for certain menial tasks. Which is why I think in Asian contexts “AI” is seen more like any other technological advancement and not some messianic invention like it is so often portrayed by techno-financial Western Capital.

There are also obviously much more niche and specialized applications in research and industry, but those won’t be the ones that get covered in mainstream media.

I think like with a lot of things, it really depends on how you use it, and I personally will not be using it as some sort of therapist, but it will inevitably be used that way by some people.

[-] invalidusernamelol@hexbear.net 15 points 2 weeks ago

There are also obviously much more niche and specialized applications in research and industry, but those won’t be the ones that get covered in mainstream media.

These applications are where it will actually persist as the rest collapses. Hyper specific micro models that handle a singular task. Very useful in industrial applications and data pipelines/warehousing.

Vector databases are pretty neat too. You can generate indexes for data that embed the records in a relative state space, meaning you can get really fast and accurate full-text search, even for images.

These will likely keep getting simpler and simpler though, until we're just back to LZMA embeddings with image tags used to locate images.
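The vector-search idea above can be sketched with a toy cosine-similarity lookup. The records and their three-dimensional vectors here are invented stand-ins; a real system would use a learned embedding model and an approximate-nearest-neighbor index:

```python
import math

# Toy vector search: each record is stored as an embedding vector,
# and a query is answered by ranking records by cosine similarity.
# Vectors are made up for illustration only.
records = {
    "cat photo": [0.9, 0.1, 0.0],
    "dog photo": [0.8, 0.2, 0.1],
    "tax form":  [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query, db, k=2):
    """Return the names of the k records most similar to the query vector."""
    return sorted(db, key=lambda name: cosine(query, db[name]), reverse=True)[:k]

# A query vector near the photo cluster retrieves the photos first.
print(search([0.85, 0.15, 0.05], records))  # -> ['cat photo', 'dog photo']
```

Because similarity is computed in the embedding space rather than on raw text, the same mechanism works for images or any other data you can embed.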

[-] Frogmanfromlake@hexbear.net 28 points 3 weeks ago

As if the world hadn’t become Americanized enough

[-] KuroXppi@hexbear.net 22 points 3 weeks ago
[-] ItsPequod@hexbear.net 12 points 3 weeks ago

We need an MGS5 Skullface emote. :skullface: here lmao

[-] Boise_Idaho@hexbear.net 7 points 2 weeks ago

if family members don’t respect your boundaries, consider cutting them off.

Another sign AI is being trained with Reddit posts.

this post was submitted on 01 Apr 2026
198 points (99.5% liked)
