submitted 1 week ago* (last edited 1 week ago) by SeventyTwoTrillion@hexbear.net to c/news@hexbear.net

A reminder that as the US continues to threaten countries around the world, fedposting is to be very much avoided (even with qualifiers like "in Minecraft") and comments containing it will be removed.

Image is of a harbor in Tasiilaq, Greenland.


NATO infighting? You love to see it, folks.

The latest incident of America's satrapies becoming increasingly unhappy about their mandated kowtowing involves, of all places, Greenland. As I'm sure most people here are aware, Greenland is an autonomous territory of Denmark with a degree of geopolitical and economic importance - the former due to its proximity to Russia, and the latter due to the proven and potential reserves of minerals that could be mined there. It's also been an odd fascination of Trump during his reign, now culminating in outright demands.

Trump has called for negotiations with Denmark to purchase Greenland, justifying this by stating that it would be safer from Russia and China under America's protection. Apparently, Norway's decision not to give him the Nobel Peace Prize further inflamed him (not that the Norwegian government decides who receives the prizes). He has also said that countries that do not allow him to make the decision - which includes not only Denmark but also other European countries - will suffer increased tariffs by June, and that he has not ruled out a military solution.

This threat has led to much internal bickering inside the West, with European leaders stating they will not give in to Trump's demands, and even sending small numbers of troops to Greenland. The most bizarre part of this whole affair is that the US already basically has total military access and control over Greenland anyway, and has since the 1950s, when they signed an agreement with Denmark. There are already several US military facilities on Greenland, and B-52 bombers have famously flown in the vicinity of the island (and crashed into it with nuclear bombs in tow, in fact). Therefore, this whole event - in line with his all-performance, little-results presidency so far - seems to be largely about the theatrics of forcing the Europeans to continue to submit to his whims. I would not be surprised if they ultimately do sign a very imbalanced deal, though - the current European leadership is bound too tightly to the US to put up even half-hearted resistance.

This is all simultaneously occurring alongside the Canadian Prime Minister's visit to China, in which longstanding sore spots in the bilateral relationship are being addressed: China is reducing tariffs on Canadian canola oilseeds, Canada is reducing tariffs on Chinese electric vehicles, and the two central banks have arranged currency swaps, among many other things. It seems no accident that Canada's reconsideration of its relationship with China is occurring as Trump has made remarks about turning Canada into the next US state and demanded the renegotiation of the USMCA.


Last week's thread is here.
The Imperialism Reading Group is here.

Please check out the RedAtlas!

The bulletins site is here. Currently not used.
The RSS feed is here. Also currently not used.

The Zionist Entity's Genocide of Palestine

If you have evidence of Zionist crimes and atrocities that you wish to preserve, there is a thread here in which to do so.

Sources on the fighting in Palestine against the temporary Zionist entity. In general, CW for footage of battles, explosions, dead people, and so on:

UNRWA reports on Israel's destruction and siege of Gaza and the West Bank.

English-language Palestinian Marxist-Leninist twitter account. Alt here.
English-language twitter account that collates news.
Arab-language twitter account with videos and images of fighting.
English-language (with some Arab retweets) Twitter account based in Lebanon. - Telegram is @IbnRiad.
English-language Palestinian Twitter account which reports on news from the Resistance Axis. - Telegram is @EyesOnSouth.
English-language Twitter account in the same group as the previous two. - Telegram here.

Mirrors of Telegram channels that have been erased by Zionist censorship.

Russia-Ukraine Conflict

Examples of Ukrainian Nazis and fascists
Examples of racism/euro-centrism during the Russia-Ukraine conflict

Sources:

Defense Politics Asia's YouTube channel and their map. Their YouTube channel has substantially diminished in quality, but the map is still useful.
Moon of Alabama, which tends to have interesting analysis. Avoid the comment section.
Understanding War and the Saker: reactionary sources that have occasional insights on the war.
Alexander Mercouris, who does daily videos on the conflict. While he is a reactionary and surrounds himself with likeminded people, his daily update videos are relatively brainworm-free and good if you don't want to follow Russian telegram channels to get news. He also co-hosts The Duran, which is more explicitly conservative, racist, sexist, transphobic, anti-communist, etc when guests are invited on, but is just about tolerable when it's just the two of them if you want a little more analysis.
Simplicius, who publishes on Substack. Like others, his political analysis should be soundly ignored, but his knowledge of weaponry and military strategy is generally quite good.
On the ground: Patrick Lancaster, an independent and very good journalist reporting in the warzone on the separatists' side.

Unedited videos of Russian/Ukrainian press conferences and speeches.

Pro-Russian Telegram Channels:

Again, CW for anti-LGBT and racist, sexist, etc speech, as well as combat footage.

https://t.me/aleksandr_skif ~ DPR's former Defense Minister and Colonel in the DPR's forces. Russian language.
https://t.me/Slavyangrad ~ A few different pro-Russian people gather frequent content for this channel (~100 posts per day), some socialist, but all socially reactionary. If you can only tolerate using one Russian telegram channel, I would recommend this one.
https://t.me/s/levigodman ~ Does daily update posts.
https://t.me/patricklancasternewstoday ~ Patrick Lancaster's telegram channel.
https://t.me/gonzowarr ~ A big Russian commentator.
https://t.me/rybar ~ One of, if not the, biggest Russian telegram channels focussing on the war out there. Actually quite balanced, maybe even pessimistic about Russia. Produces interesting and useful maps.
https://t.me/epoddubny ~ Russian language.
https://t.me/boris_rozhin ~ Russian language.
https://t.me/mod_russia_en ~ Russian Ministry of Defense. Does daily, if rather bland, updates on the number of Ukrainians killed, etc. The figures appear to be approximately accurate; if you don't believe them, reduce all numbers by 25% as a 'propaganda tax'. Does not cover everything, for obvious reasons, and virtually never details Russian losses.
https://t.me/UkraineHumanRightsAbuses ~ Pro-Russian, documents abuses that Ukraine commits.

Pro-Ukraine Telegram Channels:

Almost every Western media outlet.
https://discord.gg/projectowl ~ Pro-Ukrainian OSINT Discord.
https://t.me/ice_inii ~ Alleged Ukrainian account with a rather cynical take on the entire thing.


[-] AlHouthi4President@lemmy.ml 16 points 1 week ago
[-] kleeon@hexbear.net 32 points 1 week ago

My magic 8 ball also says yes

[-] AlHouthi4President@lemmy.ml 11 points 1 week ago

Is there something wrong with asking Qwen questions?

[-] kleeon@hexbear.net 16 points 1 week ago

no, we use LLMs at work to write diss tracks about each other. It's a toy

[-] AlHouthi4President@lemmy.ml 12 points 1 week ago

What is a treat for you (your data training ChatGPT models that will be used by the Pentagon) is, elsewhere, helping rural children in China access quality pediatric medical treatment that they otherwise would not get.

If one is going to use AI, then we can at least be responsible about which one we use.

[-] kleeon@hexbear.net 32 points 1 week ago* (last edited 1 week ago)

holy shit they're using LLM for decision-making at a HOSPITAL? That's terrifying

[-] yuritopia@hexbear.net 29 points 1 week ago

I work at one (not in China though). I've watched doctors use chat-GPT on their phone to look up drug dosages for the patient

[-] kleeon@hexbear.net 27 points 1 week ago* (last edited 1 week ago)

yeah I work in IT and I'm seeing the same kind of insanity. Just yesterday I asked DeepSeek to write me a short piece of code and it spat out something that compiles and would probably even work for a while (until it inevitably breaks), but it had a bunch of subtle multithreading mistakes that an inexperienced programmer wouldn't even think about. And most people seem to trust these things unquestioningly
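For illustration, here is a minimal sketch (hypothetical, not the code from that comment) of the kind of subtle multithreading bug being described: an unsynchronized counter that compiles, runs, and looks correct, but can silently lose updates under concurrency.

```python
import threading

# Hypothetical sketch of the kind of subtle bug an LLM can produce:
# "counter += 1" is a read-modify-write, so concurrent threads can
# interleave between the read and the write and silently lose updates.
counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1          # races with other threads

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:            # the lock makes the update atomic
            counter += 1

def run(worker, n_threads=8, n=10_000):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n,)) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(safe_increment))    # always 80000; run(unsafe_increment) can come up short
```

The unsafe version often passes a quick test, which is exactly why an inexperienced reviewer would wave it through.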

[-] AlHouthi4President@lemmy.ml 13 points 1 week ago

holy shit they’re using LLM for decision-making at a HOSPITAL? That’s terrifying

Clearly the Chinese government and health professionals care about the safety of their people so they must have implemented a rigorous method of testing the safety of this before deployment.

Someone might be curious and ask what is going on here: what is happening differently from genocideGPT?

Burger logic just assumes the Chinese are stupid.

[-] FunkyStuff@hexbear.net 23 points 1 week ago* (last edited 1 week ago)

DeepSeek is still stochastic and fundamentally based on creating plausible natural language. It can't actually think about the correctness of the information it's providing healthcare workers to make clinical decisions; it's a predictive text machine. They really are just using the wrong tool for the job, and even if it's improving metrics it's a mistake.
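The "predictive text machine" point can be sketched with a toy next-token sampler. The vocabulary and probabilities below are invented for illustration, not taken from any real model:

```python
import random

# Toy next-token distribution for some prompt: an LLM samples a
# plausible continuation by probability, with no step anywhere that
# checks whether the chosen continuation is factually correct.
next_token_probs = {"10mg": 0.45, "20mg": 0.35, "200mg": 0.15, "aspirin": 0.05}

def sample_next_token(probs, rng):
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]

# Dangerous-but-plausible completions come out at roughly their trained
# frequency; nothing in the sampler verifies that any of them is safe.
print(sorted(set(samples)))
```

Real models sample from far larger vocabularies with learned probabilities, but the mechanism is the same: plausibility, not verification.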

[-] AlHouthi4President@lemmy.ml 5 points 1 week ago

You can just write "Stupid Chinese" instead if that's what you believe.

[-] FunkyStuff@hexbear.net 18 points 1 week ago

Look, I'm sure that it's somewhat useful to have DeepSeek help in some of the menial tasks involved in running a hospital and I don't doubt that China has an economic and political system that's inherently quite good at being dynamic about using new technologies. I'm not criticizing them in general. But you can read the article you linked and see how it shows they're using DeepSeek to help make clinical decisions and you can research how an LLM works to see why I'm saying what I'm saying, it's not a machine that's appropriate for that task.

[-] AlHouthi4President@lemmy.ml 5 points 1 week ago* (last edited 1 week ago)

This is in the abstract. They are identifying weaknesses while also not outright rejecting a technology that is clearly providing some benefit.

The absence of a well-defined liability framework underscores the need for policies that ensure AI functions as an assistive tool rather than an autonomous decision-maker. With continued technological advancements, AI is expected to integrate multimodal data sources, such as genomics and radiomics, paving the way for precision medicine and personalized treatment strategies. The future of AI in healthcare depends on the development of transparent regulatory structures, industry collaboration, and adaptive governance frameworks that balance innovation with responsibility, ensuring equitable and effective AI-driven medical services.

I just don't think it is reasonable to blanket criticize a technology and also to assume that the professionals in China and the regulatory bodies are not aware of its limitations.

I would be interested in what is being done and learning more about how it is being used.

I am frustrated because we cannot even get to the stage of curiosity and discovery in a conversation; instead we are stuck at the premise.

[-] FunkyStuff@hexbear.net 13 points 1 week ago

I just don't think it is reasonable to blanket criticize a technology and also to assume that the professionals in China and the regulatory bodies are not aware of its limitations.

I don't assume that at all. I'm sure they're aware, and I'm sure the people in China who understand that LLMs are stochastic text generation engines are fighting this, while the companies with something to sell are going to come up with some language that dresses up the problem at the heart of this issue. You can say that LLMs are getting all these multimodal tools to take more data into account and that the ethical concerns are being worked out; it doesn't nullify the contradiction in the middle.

LLMs are not artificial intelligence. AI is a hype term that's inappropriate in this context. You can't expect an LLM to assist doctors to do their jobs beyond doing clerical work (a task where it's still liable to hallucinate and get important information wrong, but failure is more easily detected and shouldn't directly lead to a poor outcome for a patient) because LLMs are incapable of presenting information to a user with fidelity; that's a task that's better served by a search engine that can actually direct you to the source of the information directly, not a stochastic text predictor that can sometimes mix in data from unrelated sources.

I can't think of why LLMs are better suited at helping a healthcare worker than a search engine, unless you value having short answers quickly much higher than having correct answers (and most LLMs give long, redundant answers to simple questions anyway because they start telling you how smart you are). I can't think of why you'd use them to do any task that requires analysis, research, or cross examining sources in general. They can't do those things, they exist to take a text prompt and write a response that sounds plausible.

I can't say I've talked with any Chinese healthcare workers but tech companies all around the world have been pushing their workers to use LLM tools to aid their efficiency in a context that's arguably more suitable for the strengths and weaknesses of LLMs (easy to catch bugs with tests, purely text based, already millions of repositories to train a model on, it's the domain of the companies making the models so they're better equipped to tailor their models for that task) and I have yet to meet any developer who actually thinks it's good for production code. Maybe it's good for prototyping and toy scripts. That's the most glowing praise I've heard.

[-] AlHouthi4President@lemmy.ml 4 points 1 week ago

LLMs are not artificial intelligence. AI is a hype term that’s inappropriate in this context.

I understand this, the technology is just a word bucket not magic.

I can’t think of why you’d use them to do any task that requires analysis, research, or cross examining sources in general.

LLMs are clearly an incredibly powerful technology. The Americans are developing it to find new ways of extermination and capital accumulation.

Non-imperialist powers are developing it to defend their sovereignty, and that threatens the imperial order. Why else did the Zionists specifically assassinate Iran's experts in the field? The Islamic Republic is integrating large language models in order to improve agricultural yields in the face of climate-change-induced crop failures.

Use generative AI, don't use it, I don't care. I guarantee you that many hypocrites in this thread themselves use LLMs while they rail against them. (Congratulations: if you use Google search you gave genocidegemini your data.)

But if you use an LLM, I am saying, then use a Chinese model so your data doesn't support israel. That's my entire point here.

Instead we can't have a conversation without the same talking point being repeated. And people default back to using Gemini or whatever else is readily available.

This entire thread has left a very nasty taste in my mouth.

[-] FunkyStuff@hexbear.net 7 points 1 week ago

LLMs are clearly an incredibly powerful technology. The Americans are developing it to find new ways of extermination and capital accumulation.

I agree on both counts, but you may be overstating the actual capacity of the LLMs. They're incredibly powerful at manipulation because of how much propaganda they can pump out and how much more convincing they are than the previous generation of chatbot technology. They're also useful for weaponizing surveillance tech against large populations because, while faulty compared to having humans do the job, they're good enough to listen in on millions of people's conversations to aid in exterminating them (i.e. how they've been deployed by the Zionists, as you say).

Non-imperialist powers are developing it to defend their sovereignty, and that threatens the imperial order. Why else did the Zionists specifically assassinate Iran's experts in the field? The Islamic Republic is integrating large language models in order to improve agricultural yields in the face of climate-change-induced crop failures.

I think in that context LLMs are really just for psychological warfare, which doesn't really have anything to do with using them to do research or analyze something. The article you linked on Iran using AI for farming didn't load so I checked out a PressTV article on the same topic; they're really just modernizing farming with a suite of different technologies that include LLMs but also drones and IoT. I'm not gonna comment on how effective I suspect the LLMs are gonna be for growing and harvesting crops, but I guess maybe they save some time for writing the code for the other stuff? Again, this doesn't convince me that LLMs should be used to analyze information.

If your point is that using Chinese LLMs is better than using the American ones, I don't disagree at all. I haven't been using LLMs for a bit, but I switched to DeepSeek when it came out; that's why I'm confident when I say it's not fundamentally different and still makes the same kinds of hallucination mistakes. I'm sorry the conversation went this way. I think it was a pretty bad vibe seeing someone who has good analysis here talking about how they use an LLM, and I imagine the reason you've had several people frustrated at you has nothing to do with whether the LLM is from Alibaba and more to do with how much AI slop fills up everywhere else online.

[-] AlHouthi4President@lemmy.ml 4 points 1 week ago

Knee-jerk moralizing

[-] Damarcusart@hexbear.net 6 points 1 week ago

The models they use for that are different and purpose built for it, they aren't chatbots. An LLM is great at sifting out patterns in large amounts of data, which can be used to detect things like cancer early, instead of just being used to create a simulacrum of human conversation. They aren't just opening up chatGPT and asking it what disease their patient has (or at least, I hope they aren't).

[-] FunkyStuff@hexbear.net 7 points 1 week ago* (last edited 1 week ago)

They're using software from DeepSeek to retrieve text instead of having a human read it; it's not just for analyzing large amounts of data.

At South China Hospital in Shenzhen, DeepSeek is used to retrieve clinical evidence for urology cases, reducing the time doctors spend on literature review.

I argue this is simply the wrong tool for the job. They're using the LLM to summarize text from the medical literature, but that's a task an LLM is always going to be inferior at (especially in the context of healthcare) because it's prone to mixing in information from other sources and misinterpreting things. If you're telling it to summarize an article that explains how X protein interacts with Y pharmaceutical, it's not capable of actually looking up information about X and Y and synthesizing it to give you an informed conclusion like an actual expert, which is how the companies making these products sell it. Instead, it's giving you an answer generated from just as much data about X protein and Y pharmaceutical as about every other protein and pharmaceutical in the database, plus every piece of fiction it loosely associates with the tokens in the prompt or its partial response. These models don't have any mechanism that can narrow the vast data they're trained on down to just the bit that's actually relevant and answer from there; that's why they hallucinate so much.

That being said, I don't think it's bad to use LLMs to do things that you couldn't do otherwise like analyze a larger scale of data. This can open a whole can of worms because any conclusion it draws is just as fallible as the other problematic cases, but I trust that healthcare professionals can find a point where the pros outweigh the cons like with any other technology.

[-] Damarcusart@hexbear.net 6 points 1 week ago

Never mind, seems I misinterpreted it. I thought they were using the same sort of software, but purpose-built, not a chatbot to summarize medical lit. I agree, this does not seem like a good tool for that job.

[-] FunkyStuff@hexbear.net 5 points 1 week ago

They do that too. There's good stuff in the article, there's some not-so-good stuff too.

[-] AlHouthi4President@lemmy.ml 10 points 1 week ago* (last edited 1 week ago)

Clearly people are using generative AI. I think it's good to advertise on this kind of platform a Chinese model that is used to improve people's lives and isn't being used in genocide.

I understand criticizing US generative AI models, but unlike US tech models, the Alibaba model saves people's lives. Chinese technology is being used primarily for social welfare rather than capital accumulation and extermination.

https://thechinaacademy.org/chinas-ai-is-saving-cancer-patients-missed-by-doctors/

[-] kleeon@hexbear.net 21 points 1 week ago

Chinese model that is used to improve peoples lives

wdym? Qwen is made by Alibaba corporation to make money

[-] AlHouthi4President@lemmy.ml 14 points 1 week ago

Yes it is.

Harnessing the power of market capitalism to advance the social good is the basis of the Chinese economic development model.

[-] Muinteoir_Saoirse@hexbear.net 17 points 1 week ago

Yeah, and besides, who cares about all the poor communities in Malaysia that have to house the data centres that train these chatbots? It's unverifiably beneficial to China, and that makes it morally superior to other techbro billionaire environmental disaster chatbots.

[-] Muinteoir_Saoirse@hexbear.net 20 points 1 week ago

Pretending AI run by a multibillion dollar global conglomerate is a social good is wild. A social good would be universal free health-care, not chatbot doctors.

[-] QinShiHuangsShlong@hexbear.net 4 points 1 week ago

Your argument relies on moral outrage and abstract ethics rather than material analysis. You frame the issue as "data centres harm poor communities, corporations are bad, therefore AI is immoral." That is ethical idealism. A dialectical materialist approach instead asks who owns the technology, who controls the surplus, which class gains power from it, and how it transforms relations of production. Without those questions, the analysis never moves beyond surface impressions.

Calling something a multibillion-dollar conglomerate is not an analysis. The decisive issue is which state and which class structure directs it. A Chinese firm operating within China’s socialist market economy is part of a system defined by state planning, public ownership in commanding sectors, industrial policy, and long-term national development goals. This is not comparable to Silicon Valley venture capital, US defense-linked monopolies, or rent-seeking finance capital. The size of capital does not determine its class character, and treating all large-scale production as inherently capitalist ignores the actual structure of the Chinese system.

This specific example fits into the broader Chinese development model as a whole. That system has produced clear and measurable benefits for the Chinese people through rapid industrialization, infrastructure construction, rising living standards, and the elimination of absolute poverty. Internationally, it has helped create a new multipolar pole that weakens imperial monopoly over development financing and technology. Through the Belt and Road Initiative, China has enabled massive infrastructure construction across the Global South, including railways, ports, power generation, telecommunications, and logistics networks that Western capital refused to build because profit rates were too low.

Those outcomes are not ideological claims but material facts. Over 900 million people were lifted from poverty, China built the world’s largest high-speed rail network, expanded its national energy grid, upgraded its industrial base, and achieved a high degree of technological self-reliance. The BRI has provided long-term financing and physical infrastructure across Asia, Africa, and Latin America, helping countries escape dependence on IMF austerity and underdevelopment. This is development rooted in productive investment, not charity or branding.

The question of data centres in Malaysia is a separate issue and must be analyzed materially rather than morally. Infrastructure hosting is not exploitation in itself. What matters is whether it produces domestic employment, technology transfer, tax revenue, energy upgrades, and integration into higher stages of production. Those concrete relations determine whether such projects deepen dependency or contribute to development, not abstract condemnation of infrastructure as such.

Your idea of social good treats socialism as distribution without production. Universal healthcare cannot exist through moral assertion alone. It requires trained doctors, hospitals, logistics systems, energy supply, and industrial surplus. Industrial surplus requires advanced productive forces. China’s path was to build that material base first and then expand social provision on top of it, which is precisely why those programs became sustainable rather than rhetorical.

There is also a sharp irony in an Irish person (going off your username, please correct me if I'm wrong) directing moral condemnation at the Chinese development model. Ireland has been governed for roughly a century by the Fianna Fáil–Fine Gael blueshirt uniparty, which has steadily sold out the Irish working class in the interests of foreign capital. The result is an economy structured around tax haven status for US multinationals, with little industrial sovereignty and minimal democratic control over production. While corporate profits soar on paper, living conditions deteriorate. The healthcare system remains in permanent crisis, homelessness continues to rise year after year, housing is treated as a speculative asset rather than a social necessity, and rent-seeking dominates large sections of the economy.

This situation persists in part because there is no real organized proletarian opposition capable of challenging the political consensus. Power circulates within the same narrow elite, allowing political failsons like Simon Harris to rise steadily through the state apparatus despite repeated incompetence. Billions of taxpayer euros are burned on disasters such as the National Children’s Hospital, emblematic of a system where public funds are privatized through mismanagement while accountability is nonexistent. At the same time, energy-intensive American data centres continue to expand across the country with minimal scrutiny.

In this context, condemning China’s development model rings hollow. China subordinates capital to national development through planning and state direction, while Ireland has subordinated society to capital under a neoliberal uniparty regime.

[-] Muinteoir_Saoirse@hexbear.net 3 points 1 week ago

I'm not a materialist, I hate chatbots. Simple as.

[-] QinShiHuangsShlong@hexbear.net 6 points 1 week ago

You should try materialism. Hating any technology in its entirety is silly. Why hate the loom simply because the capitalist uses it to further exploit the workers? Hate the capitalist and work to retake the loom for the benefit of the people.

[-] Muinteoir_Saoirse@hexbear.net 3 points 1 week ago* (last edited 1 week ago)

I'll use material analysis where I feel it fits, and when it comes to chatbots I'll continue to think they're environmental nightmare slop that is ruining conversations and a completely pointless waste of time. But thanks for writing all that.

[-] QinShiHuangsShlong@hexbear.net 4 points 1 week ago

The foundation of a chatbot is at its core the same as many supremely useful AI technologies, such as those used to diagnose cancer early. The chatbot incarnation of this technology is caused by the capitalist need to ever expand its profit/rent-seeking. This is exactly my loom point: just because capitalists use technology to do bad things doesn't make the technology bad.

[-] Muinteoir_Saoirse@hexbear.net 3 points 1 week ago

The chatbot incarnation of this technology is caused by the capitalist need to ever expand its profit/rent-seeking.

Yes and this is what I made fun of. The use of a shitty chatbot. What are you even trying to say? You keep arguing with things unrelated to my point: chatbots are fucking pointless, regardless of which shitty chatbot you use.

I don't care about some nebulous "good" alternative use of LLMs. That has literally zero to do with what I have said or replied to. I have been, always and every time, talking very specifically about how chatbots are stupid. You'll notice I didn't use the term AI, or LLM. I said chatbots. Every time. Because chatbots suck, and that is the single point I have been making. I have made no comments on other uses of LLM technology.

So again: I hate chatbots. Simple as.

[-] QinShiHuangsShlong@hexbear.net 3 points 1 week ago

Funny, your original comment framed it as AI being a “social good” problem, not chatbots specifically. You narrowed it down to hating chatbots after the discussion started. I'll agree on chatbots specifically but my wider point still stands: the capitalist conditions producing these tools shape how they appear and function. Chatbots are “stupid” in this incarnation because of profit-driven priorities, but that doesn’t make the underlying technology itself useless.

[-] Muinteoir_Saoirse@hexbear.net 3 points 1 week ago

I also didn't condemn Chinese development, nor did I defend Ireland?? I just hate chatbots, and I don't care who makes them.

[-] QinShiHuangsShlong@hexbear.net 3 points 1 week ago

That might be my bad; to me your "multibillion-dollar conglomerate" framing read like a very common 白左 ("white left") shorthand that treats all large-scale production as inherently capitalist, a knee-jerk dismissal of Chinese socialism, and I was pushing back against that.

[-] WokePalpatine@hexbear.net 7 points 1 week ago

Humans are able to spot LLM-speak better than LLMs because we're not word calculators. That Qwen analysis is vague bullshit.

[-] dat_math@hexbear.net 2 points 1 week ago

[-] Muinteoir_Saoirse@hexbear.net 29 points 1 week ago

"you are an expert at generative AI analysis." Ah there we go, as long as we prompt the chatbot by telling it to be an expert at a task, that will ensure it is an expert at the task, eliminating any natural barriers it may have (for instance being an unthinking chatbot).

[-] Damarcusart@hexbear.net 10 points 1 week ago

SMH, these people getting chatbots to write articles for them don't just say "You are an expert writer who doesn't write like a chatbot" to them.

[-] AlHouthi4President@lemmy.ml 3 points 1 week ago

Yes it has its problems. Is there a better way to encourage a particular expert in the MoE models? I am open to learning better methods.

I included the whole context window for transparency, I figured this is best practice.

[-] Muinteoir_Saoirse@hexbear.net 25 points 1 week ago

Well everyone else in the comments was able to determine that this was written by a chatbot without asking another chatbot by using our own human thoughts. But sure, ask the chatbot and then try to guess whether or not it gave you useful information or spat out nonsense (by using your own human thoughts). That seems like a useful and not at all arbitrary intermediary step.

[-] AlHouthi4President@lemmy.ml 10 points 1 week ago

You're just being passive-aggressive for no reason. Please block me.

this post was submitted on 19 Jan 2026
116 points (99.2% liked)

news

24560 readers
654 users here now

Welcome to c/news! We aim to foster a book-club type environment for discussion and critical analysis of the news. Our policy objectives are:

We ask community members to appreciate the uncertainty inherent in critical analysis of current events, the need to constantly learn, and take part in the community with humility. None of us are the One True Leftist, not even you, the reader.

Newcomm and Newsmega Rules:

The Hexbear Code of Conduct and Terms of Service apply here.

  1. Link titles: Please use informative link titles. Overly editorialized titles, particularly if they link to opinion pieces, may get your post removed.

  2. Content warnings: Posts on the newscomm and top-level replies on the newsmega should use content warnings appropriately. Please be thoughtful about wording and triggers when describing awful things in post titles.

  3. Fake news: No fake news posts ever, including April 1st. Deliberate fake news posting is a bannable offense. If you mistakenly post fake news the mod team may ask you to delete/modify the post or we may delete it ourselves.

  4. Link sources: All posts must include a link to their source. Screenshots are fine IF you include the link in the post body. If you are citing a Twitter post as news, please include the Xcancel.com (or another Nitter instance) or at least strip out identifier information from the twitter link. There is also a Firefox extension that can redirect Twitter links to a Nitter instance, such as Libredirect or archive them as you would any other reactionary source.

  5. Archive sites: We highly encourage use of non-paywalled archive sites (i.e. archive.is, web.archive.org, ghostarchive.org) so that links are widely accessible to the community and so that reactionary sources don’t derive data/ad revenue from Hexbear users. If you see a link without an archive link, please archive it yourself and add it to the thread, ask the OP to fix it, or report to mods. Including text of articles in threads is welcome.

  6. Low effort material: Avoid memes/jokes/shitposts in newscomm posts and top-level replies to the newsmega. This kind of content is OK in post replies and in newsmega sub-threads. We encourage the community to balance their contribution of low effort material with effort posts, links to real news/analysis, and meaningful engagement with material posted in the community.

  7. American politics: Discussion and effort posts on the (potential) material impacts of American electoral politics is welcome, but the never-ending circus of American Politics© Brought to You by Mountain Dew™ is not welcome. This refers to polling, pundit reactions, electoral horse races, rumors of who might run, etc.

  8. Electoralism: Please try to avoid struggle sessions about the value of voting/taking part in the electoral system in the West. c/electoralism is right over there.

  9. AI Slop: Don't post AI generated content. Posts about AI race/chip wars/data centers are fine.

founded 5 years ago
MODERATORS