view the rest of the comments
news
Welcome to c/news! We aim to foster a book-club type environment for discussion and critical analysis of the news. Our policy objectives are:
-
To learn about and discuss meaningful news, analysis and perspectives from around the world, with a focus on news outside the Anglosphere and beyond what is normally seen in corporate media (e.g. anti-imperialist, anti-Zionist, Marxist, Indigenous, LGBTQ, people of colour).
-
To encourage community members to contribute commentary and for others to thoughtfully engage with this material.
-
To support healthy and good faith discussion as comrades, sharpening our analytical skills and helping one another better understand geopolitics.
We ask community members to appreciate the uncertainty inherent in critical analysis of current events, the need to constantly learn, and take part in the community with humility. None of us are the One True Leftist, not even you, the reader.
Newcomm and Newsmega Rules:
The Hexbear Code of Conduct and Terms of Service apply here.
-
Link titles: Please use informative link titles. Overly editorialized titles, particularly if they link to opinion pieces, may get your post removed.
-
Content warnings: Posts on the newscomm and top-level replies on the newsmega should use content warnings appropriately. Please be thoughtful about wording and triggers when describing awful things in post titles.
-
Fake news: No fake news posts ever, including April 1st. Deliberate fake news posting is a bannable offense. If you mistakenly post fake news the mod team may ask you to delete/modify the post or we may delete it ourselves.
-
Link sources: All posts must include a link to their source. Screenshots are fine IF you include the link in the post body. If you are citing a Twitter post as news, please include the Xcancel.com (or another Nitter instance) or at least strip out identifier information from the twitter link. There is also a Firefox extension that can redirect Twitter links to a Nitter instance, such as Libredirect or archive them as you would any other reactionary source.
-
Archive sites: We highly encourage use of non-paywalled archive sites (i.e. archive.is, web.archive.org, ghostarchive.org) so that links are widely accessible to the community and so that reactionary sources don’t derive data/ad revenue from Hexbear users. If you see a link without an archive link, please archive it yourself and add it to the thread, ask the OP to fix it, or report to mods. Including text of articles in threads is welcome.
-
Low effort material: Avoid memes/jokes/shitposts in newscomm posts and top-level replies to the newsmega. This kind of content is OK in post replies and in newsmega sub-threads. We encourage the community to balance their contribution of low effort material with effort posts, links to real news/analysis, and meaningful engagement with material posted in the community.
-
American politics: Discussion and effort posts on the (potential) material impacts of American electoral politics is welcome, but the never-ending circus of American Politics© Brought to You by Mountain Dew™ is not welcome. This refers to polling, pundit reactions, electoral horse races, rumors of who might run, etc.
-
Electoralism: Please try to avoid struggle sessions about the value of voting/taking part in the electoral system in the West. c/electoralism is right over there.
-
AI Slop: Don't post AI generated content. Posts about AI race/chip wars/data centers are fine.
I find it interesting that Grok is in the twitter replies is telling chuds that attacks on farms in South Africa is not a genocide. Grok tells one of them that Elon is lying. I thought Elon had created this AI to spread his narratives but it is calling him a liar.
Grok is woke!!!!
Met Grok, She’s Woke
I wonder how hard it is to fix AI bugs.
If you train a model on data and it outputs in a way you don’t like, and that dislike that is linked to the data itself skewing your output, to fundamentally 'fix' it you have to tune the dataset yourself and retrain the model. On Grok's scale, that’s around a trillion tokens (morphemes, words, punctuation, etc.) that you need to sift through and decide what to manually edit or prune so that the statistics work in your favor whilst not otherwise fucking up its generalization.
If you publicly source said data/use other updating datasets as dependencies and choose to continue publicly sourcing it in further updates (i.e., keeping your model up to date with the current landscape), then if you want to tune an opinion/view out of existence, it is going to be a Sisyphean task.
There's a bandage solution which is fucking with the system prompt, but LLMs are inherently leaky and need active patchwork in such cases to combat jailbreaking.
It’s inherently biased by being trained on a dataset generally made up of true things
Basically impossible, they're inherently biased by the data they learn from but once they're done training there you can't tinker with the system itself, only try to impose external constraints that have unlimited potential to be breached
so the libs use it more