
cross-posted from: https://ibbit.at/post/178862

Just as the community adopted the term "hallucination" to describe additive errors, we must now codify its far more insidious counterpart: semantic ablation.

Semantic ablation is the algorithmic erosion of high-entropy information. Technically, it is not a "bug" but a structural byproduct of greedy decoding and RLHF (reinforcement learning from human feedback).

During "refinement," the model gravitates toward the center of the Gaussian distribution, discarding "tail" data – the rare, precise, and complex tokens – to maximize statistical probability. Developers have exacerbated this through aggressive "safety" and "helpfulness" tuning, which deliberately penalizes unconventional linguistic friction. It is a silent, unauthorized amputation of intent, where the pursuit of low-perplexity output results in the total destruction of unique signal.

When an author uses AI for "polishing" a draft, they are not seeing improvement; they are witnessing semantic ablation. The AI identifies high-entropy clusters – the precise points where unique insights and "blood" reside – and systematically replaces them with the most probable, generic token sequences. What began as a jagged, precise Romanesque structure of stone is eroded into a polished, Baroque plastic shell: it looks "clean" to the casual eye, but its structural integrity – its "ciccia" – has been ablated to favor a hollow, frictionless aesthetic.

We can measure semantic ablation through entropy decay. By running a text through successive AI "refinement" loops, the vocabulary diversity (type-token ratio) collapses. The process performs a systematic lobotomy across three distinct stages:

Stage 1: Metaphoric cleansing. The AI identifies unconventional metaphors or visceral imagery as "noise" because they deviate from the training set's mean. It replaces them with dead, safe clichés, stripping the text of its emotional and sensory "friction."

Stage 2: Lexical flattening. Domain-specific jargon and high-precision technical terms are sacrificed for "accessibility." The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.

Stage 3: Structural collapse. The logical flow – originally built on complex, non-linear reasoning – is forced into a predictable, low-perplexity template. Subtext and nuance are ablated to ensure the output satisfies a "standardized" readability score, leaving behind a syntactically perfect but intellectually void shell.

The result is a "JPEG of thought" – visually coherent but stripped of its original data density through semantic ablation.
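The entropy-decay measurement described above is easy to sketch. A minimal Python illustration of the type-token ratio and word-level entropy metrics (the example sentences are invented for illustration, not real model output, and a real study would need much longer texts):

```python
import math
import re
from collections import Counter

def type_token_ratio(text: str) -> float:
    """Vocabulary diversity: unique words divided by total words."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0

def shannon_entropy(text: str) -> float:
    """Word-level Shannon entropy in bits: a proxy for information density."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

original = "The jagged Romanesque vault eroded into a polished Baroque facade."
ablated = "The old building slowly turned into a nice new building."

# The "polished" version recycles common words, so both metrics drop.
assert type_token_ratio(ablated) < type_token_ratio(original)
assert shannon_entropy(ablated) < shannon_entropy(original)
```

Running a draft through successive refinement loops and plotting these two numbers per pass is the "entropy decay" curve the article describes.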

If "hallucination" describes the AI seeing what isn't there, semantic ablation describes the AI destroying what is. We are witnessing a civilizational "race to the middle," where the complexity of human thought is sacrificed on the altar of algorithmic smoothness. By accepting these ablated outputs, we are not just simplifying communication; we are building a world on a hollowed-out syntax that has suffered semantic ablation. If we don't start naming the rot, we will soon forget what substance even looks like.

[-] miz@hexbear.net 83 points 2 days ago

Have you ever met someone and they seem cool and then about ten minutes in they drop something like "well I asked ChatGPT and..." and then you just mentally check out because fuck this asshole?

[-] MeetMeAtTheMovies@hexbear.net 47 points 2 days ago

I had a friend who was incredibly creative. He did standup and painted and made short films and did photography and wrote fiction and just generally was always busy creating. He prided himself on being weird and original, sometimes at the expense of accessibility, but he had a very distinct voice. A year ago he went all in on AI everything and his output has just turned to mush. It’s heartbreaking.

[-] Damarcusart@hexbear.net 14 points 2 days ago

A year ago he went all in on AI everything and his output has just turned to mush.

That is scary. I have looked into using AI to help with writing a few times, and every time it has felt like it made me an actively worse writer. I could imagine also being pulled into a feedback loop of feeling like my work isn't good enough, so I get AI to "help" and actively get worse at writing as a result, and need to rely more on AI, ultimately ending up in a situation where I am no longer capable of actually creating things anymore.

It really does feel like anti-practice: it reinforces bad habits and actively unimproves skills instead of honing them. I've never seen an artist who started using AI more frequently (whether written or drawn artwork) actually improve. They would stagnate at best, and oftentimes they would just use it as a "get rich quick" kind of thing. They always seem to try to monetise it: their output would be 10x what it was, but with 1/10th the quality and self-expression that made their art compelling in the first place.

[-] Frivolous_Beatnik@hexbear.net 22 points 2 days ago

Problem I find is "AI" use in creative fields is very tempting on that basal, instant gratification, solves-your-creative-block level. I've had so many instances where I'm struggling to find a way to phrase something, or to write a narrative and I think for a split second "the slop machine could help, just a little won't hurt", but it weakens the creative skill by destroying that struggle and filling the gap with grey flavorless algorithmic paste.

I'm a shit writer but I can say that, when I saw my own ideas reflected back with the imperfect edges and identity sanded down, it was a sad imitation of my already amateur skill. I would hate to see it happen to someone who developed a distinct style like your friend

[-] Frogmanfromlake@hexbear.net 10 points 2 days ago

Damn that’s sad. Can’t help but wonder why someone who gets a lot out of creating ideas on their own would let themselves outsource it all to AI

[-] JustSo@hexbear.net 12 points 2 days ago

I suspect in large part it's because using generative tools hits the brain differently and delivers a faster loop for drip-feeding dopamine, compared to creative work, which often involves a long delay before the ultimate gratification. Our brains optimise for dopamine reward, which has been useful for most of our evolution, but we have become very good at hijacking that neurological feature with addictive activities.

I think generative tools might be uniquely sinister because the surrogate activity of prompting and generating still ends with some output that is superficially similar to what you might have aimed towards in starting creative work.

So unlike gambling or binging drugs, using generative tools leaves you with these generated artifacts that feel like creative output. I imagine that if this sufficiently satisfies the other non-dopaminergic rewards intrinsic to creative activity, it is less likely that whatever internal drive compels someone to create (their creativity / spark / soul / whatever the fuck) would object and create the necessary cognitive dissonance to stop using generative tools and return to manual creative work.

In other words they are probably addicted to AI and don't feel any loss from stopping their creative output. Sadly their creative abilities will be atrophying rapidly at the same time and I doubt they'll find much joy in creativity in the future.

[-] AssortedBiscuits@hexbear.net 10 points 2 days ago

They're getting skinner-boxed. AI doesn't always generate what they want, but its success rate is high enough for people who love AI that they want to gamble for the chance of AI generating something they actually want. Literally the same psychology as opening lootboxes and booster packs.

[-] Flyberius@hexbear.net 25 points 2 days ago

Yeah actually. It's happened to me a few times in the last year.

[-] Des@hexbear.net 28 points 2 days ago

my coworker has fallen down this rabbit hole. it sucks too because i've spent years turning him away from the far right and he became chinapilled

but now it's just "i'll ask grok" stalin-stressed

[-] SchillMenaker@hexbear.net 8 points 2 days ago

I ruin it for people by talking to their robot myself. These people have learned to tiptoe around its flaws and interpret that as it having none. Meanwhile I treat it like a redheaded step-mule and it never fails to disappoint.

[-] miz@hexbear.net 6 points 2 days ago

would enjoy hearing a story or two about times this has worked. what is your strategy, do you borrow their phone or...

[-] SchillMenaker@hexbear.net 8 points 2 days ago

I just say "that's cool, let me talk to it" and they're usually excited to let you see how great their little magic box is. Then you ride it hard and make it embarrass itself over and over because it's a piece of shit and keep berating it for how shitty it is. They want to be defensive but it's plainly obvious that this thing can't even communicate as coherently as a seven year old and it takes some of the shine off.

As for examples, I'm pretty sure that everyone who I've done it to still uses it regularly but, importantly, none of them bring their AI assistants up to me anymore. They might not have changed their behavior but every time they see me they remember that I rubbed that thing's nose in itself and that's worth something.

[-] miz@hexbear.net 10 points 2 days ago* (last edited 2 days ago)

what's a go-to line of questioning that makes it shit the bed

[-] KuroXppi@hexbear.net 2 points 23 hours ago* (last edited 23 hours ago)

I watched this series with a guy asking LLMs to count to 100:

https://www.youtube.com/watch?v=5ZlzcjnFKvw

If it can fail at something so obvious, why would anyone trust it with anything they don't understand, where they can't see the mistakes that will definitely be there?

It's like if someone lied straight to your face about stealing ten dollars, then you trust them to do your taxes.

(Note: even when it does manage to count (non-sequentially) to 100, it still fails because it repeats some numbers. On a surface level someone may look at the output, see 100 in the final place, and assume it was correct throughout; they'll pat themselves on the back and say "good on me for verifying" while the error is carried forward. So even when it's ostensibly right it can still be wrong. I'm sure you know this, but this is how I'll break it down next time someone asks me to use an LLM to do maths)
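The "check the whole sequence, not just the last number" point can be sketched as a hypothetical verification (not from the video, just an illustration of why the surface-level check fails):

```python
def verify_count(tokens, n=100):
    """A count to n is only correct if it is exactly 1..n in order.
    Checking that the last item is n misses repeats and gaps."""
    return tokens == list(range(1, n + 1))

# A plausible LLM failure: repeats 98, skips 99, still ends at 100.
bad = list(range(1, 99)) + [98, 100]

assert bad[-1] == 100          # the surface-level "verification" passes
assert not verify_count(bad)   # the real check catches the error
```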

[-] KuroXppi@hexbear.net 9 points 2 days ago

Yeh same. A coworker used to be really good at surfacing solutions from online forums, now she asks Copilot which suggests obvious or incorrect solutions (that I've either already tried or know won't work) and I have to be like yep uhuh hrmm I'll try that (because she's my line manager)

[-] SuperZutsuki@hexbear.net 4 points 2 days ago* (last edited 2 days ago)

Well tbh, AI slop and Google enshittification made it much harder to find solutions. Every nation that enforces the use of this dogshit is going to eat itself alive producing stupider and stupider generations until no one understands how water purification, agriculture, or electricity works anymore. Meanwhile, China will have trains that go 600km/h and maybe even fusion reactors.

[-] came_apart_at_Kmart@hexbear.net 11 points 2 days ago

luckily, I don't interact frequently with chatbox users. i know they exist, but i can't imagine interacting with one on purpose and asking it things. it's bad enough i see my searches being turned into prompts that dump out stuff. i don't mind when it's some example of DAX code or a terminal command i can examine.

but these people who use it to do research and have it synthesize information, i cannot relate.

it takes shortcuts by cutting out details and making these broad generalizations to dump out heuristics that can be wildly inaccurate.

for more than a decade, my professional role has been the development of free, broadly applicable resources for lay audiences built on detailed, narrow reference materials and my own subject matter expertise from many years of formal education and a wide range of hands on experience.

i wasn't really worried about AI replacing me because i have a weird cluster of complementary resource development skills, but occasionally i have stumbled across generative resources in my field and they are embarrassing. like just explicitly inaccurate and unhelpful. and even more hilariously, the people who make them try to charge for them.

if anything, my knowledge has become more valuable because there's so much misleading generative garbage online, people who want accurate information are more overwhelmed and frustrated than ever.

[-] LeeeroooyJeeenkiiins@hexbear.net 15 points 2 days ago

Have you ever just googled something and it shoved an AI summary in your face that looked plausible enough to be accurate and you shared that information with the caveat of "according to chatgpt" since it might be wrong and then the other person just treated you like an asshole

[-] miz@hexbear.net 14 points 2 days ago* (last edited 2 days ago)

I guess I painted with too broad a brush, I meant more a confident citation intended to be authoritative (or at least better than average) advice, not so much an "I just looked it up on web search and let me make sure I advise that I'm looking at the slop thingie they put at the very top"


This is made worse because of how illiterate westerners are too. If you can't edit the output of a chat bot, you can't tell how shit the output is. It's like when you see a social media post and it's clearly written by ai cause there's incomplete sentences, weird capitalizations, the overuse of lists that could just be items separated by commas, blatantly incorrect information, etc. It's maddening. I've received emails from new businesses trying to put themselves out there and it's all ai slop.

There's a race to the bottom in our societies. Who can be the most lazy; who can think the least; who can put in the least amount of effort and still get everything they want. It's like those studies where they put people in an empty room with nothing but a table, a chair, and a button on the table. The button shocks you. And people will sit there the whole time shocking themselves instead of being alone with their thoughts.

Why are westerners, or maybe this is a global phenomenon, so afraid of their own minds, thoughts, feelings, boredom? Do people really just want to be little pleasure piggies? Press button gimme slop. Do people not like learning? Cause that's sad if they don't.

[-] SuperZutsuki@hexbear.net 7 points 2 days ago* (last edited 2 days ago)

They don't like learning because at some point in their past, learning got them in trouble, either with a bully in school or some authority figure. Anti-intellectualism is the dogma of American secular religion and it is strictly enforced by its adherents.

[-] happybadger@hexbear.net 50 points 2 days ago

It's short and this writer seems to be the one who coined the term, but I'm reposting it out of the aggregator instance because it's a really good term for something I didn't have a word for before. Something about AI writing even when the tell-tale signs are removed really stands out to me. When Walter Benjamin was studying the same kind of phenomenon with art in the 1930s, he described it as the cultic significance of a work that's lost when we industrially reproduce it. The individual oil painting is a museum exhibit or family heirloom, the Thomas Kinkade print is a single-serving plastic food container that hides empty wall space. Every LLM could write a thousand novels a second for a thousand years and none of them would be worth reading because there's no imagination behind them.

I like how it's technically represented here in simplifying processes.

Yeah, you can tell when something is ai cause it's soulless. People who aren't creatives love this shit cause they never really engaged with art to begin with; it was always a commodity to hang on the wall or put on the bookshelf. Creatives cringe at ai "art" cause it's not creative at all.

[-] DragonBallZinn@hexbear.net 39 points 2 days ago

Admittedly, I tried to give LLMs a real chance but all of them are just…so fucking cringe.

ChatGPT writes like Steven Universe decided to double down on patronizing. Gemini makes up words. Try to explain a point and ask it for criticism? It will describe anything it disagrees with as “the [x] trap.”

[-] Awoo@hexbear.net 16 points 2 days ago

I can't use any of them because the way they pretend to be people instead of apps/tools pisses me off.

[-] AssortedBiscuits@hexbear.net 12 points 2 days ago

I basically have to "preprompt" any prompt with "answer all following questions with the following format" and a massive list of what I specify AI can and cannot do. I have an entire section to get rid of its obnoxious attempts at passing for a human with personhood (do not use emojis, do not directly address me, do not be cordial, do not be polite, do not be friendly, do not answer in complete sentences). There's also a section on getting rid of obnoxious AI-isms: do not use em-dashes, do not use any word from a long list of words overly used by AI, and do not use the words "no," "not," or "but" (which is there so the output avoids "it's not x, it's y").

The preprompt got too long for AI, so I had to dump it into a txt file and make AI read it before I would even want to use AI. And even then, I still have little use for AI lmao. But I guess "making AI not suck so hard" was a fun creative exercise.

[-] tocopherol@hexbear.net 19 points 2 days ago* (last edited 2 days ago)

I gave up on their creative use pretty much after my first try. I saw people making rap lyrics, I was intrigued, then realized it was absolutely impossible to get it to write anything besides a flow like "we do it like this / then do it like that / all those other guys are just wickedy wack" sort of cheesy-ass style. This was GPT 3.5 I think, I tried later ones and it was no better at all.

I'm not too worried about it replacing real art, the commercial 'creative' jobs like advertising music or illustrators are probably already being replaced, but even that style of art done by 'AI' is just so irritating to me and usually has some indefinable thing about it that makes it feel bad to look at versus actual illustrating.

[-] Euergetes@hexbear.net 23 points 2 days ago

An AI could never find a way to stick the stale grains of a bit into the heap of every fucking post garf-chan

[-] AlfalFaFail@lemmy.ml 25 points 2 days ago
[-] happybadger@hexbear.net 21 points 2 days ago* (last edited 2 days ago)

I really like that parallel between formal academic English with its socioeconomic dimensions and algorithmically-generated English. To me there's a certain point where speaking a language becomes singing it. When I actually give a shit about how I'm writing, I think in terms of rhythm with the structure and melody with the word choice. There's a proper sense of consonance and dissonance in the way early 20th century composers used it. Even though I know French/Spanish/Romanian vocabulary and can functionally get around in countries that speak those languages, there's no way I could speak or write musically in them. If I know the strictest Academie Francaise standards for French it teaches me nothing about how to write poetically and I would always stand out from a single incorrect word unless I spent decades learning the nuances of the language in France. ESL speech patterns also really stand out to me as an externally reinforced rather than internally generated style.

[-] BeanisBrain@hexbear.net 8 points 2 days ago

The "ChatGPT" accusation also gets leveled at autistic people fairly often.

[-] mickey@hexbear.net 10 points 2 days ago

I like this, I relate to this from the opposite side of the spectrum; when I've tried to relate e.g. a series of events as a story on here, it is very dry and precise because I want it to be as clear as possible. LLMs don't really write that way because they are meant to mimic human writing I suppose, but I can sound very terse and robotic.

[-] FortifiedAttack@hexbear.net 23 points 2 days ago* (last edited 2 days ago)

I don't really see what's more dangerous about this than what the business world has already been doing since long before AI. Everything is standardized, and everyone is following this or that trend. Creativity was already actively discouraged in favor of following strict guidelines on how to do things. And AI is perfectly adequate to achieve this.

[-] happybadger@hexbear.net 30 points 2 days ago* (last edited 2 days ago)

Certainly, but prior to AI my neo-Luddite enemy was the business world. Corporate Memphis was the thing I attacked before image generators. It's a malignant outgrowth of the same demonic trend that compounds the Hapsburg imagery by treating those Corporate Memphis simulacra as art.

[-] chgxvjh@hexbear.net 14 points 2 days ago* (last edited 2 days ago)

Charlie Stross called corporations AI 8 years ago https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future

[-] JDvecna@hexbear.net 20 points 2 days ago

Good article, thank for share. Would have loved to see an exemplar text excerpt go through "refinement" to prove the author's point

[-] catter@hexbear.net 22 points 2 days ago

This and some citations would've made this a really valuable article. Hopefully this idea will get refined a bit with better support.

[-] Carl@hexbear.net 19 points 2 days ago

@grok improve this article with some citations and examples

[-] catter@hexbear.net 12 points 2 days ago

@grok present this in podcast form

[-] SoyViking@hexbear.net 12 points 2 days ago

I ran the article through ChatGPT five times. It should be super-improved by now:

CW: AI slop

Here is a refined version that preserves your argument while tightening cadence, sharpening conceptual clarity, and reducing minor redundancies:


Semantic Ablation: Why AI Writing Is Boring — and Potentially Dangerous

The AI community coined hallucination to describe additive error — moments when a model fabricates what was never present. We lack a parallel term for its quieter, more insidious opposite: semantic ablation.

Semantic ablation is the algorithmic erosion of high-entropy meaning. It is not a malfunction but a structural consequence of probabilistic decoding and reinforcement learning from human feedback (RLHF). Where hallucination invents, semantic ablation subtracts. It removes precisely what carries the greatest informational weight.

In the act of “refinement,” a model gravitates toward the statistical center of its distribution. Rare, high-precision tokens — those inhabiting the long tail — are replaced with safer, more probable alternatives. Safety and helpfulness tuning intensify this centripetal pull, penalizing friction and rewarding fluency. The result is not falsehood but attenuation: low perplexity purchased at the cost of semantic density.

When an author asks AI to “polish” a draft, the apparent improvement is often compression. High-entropy clusters — loci of originality, tension, or conceptual risk — are smoothed into statistically reliable phrasing. A jagged Romanesque vault becomes a polished Baroque façade of molded plastic: immaculate in finish, hollow in load-bearing strength. The surface gleams; the structure no longer carries weight.

Semantic ablation can be understood as entropy decay. Pass a text through successive AI refinements and its informational variance contracts. Vocabulary diversity narrows. Type–token ratios decline. Syntactic range constricts. The process typically unfolds in three stages:


Stage I: Metaphoric Cleansing

Unconventional metaphors and vivid imagery deviate from distributional norms and are treated as noise. They are replaced with familiar constructions. Emotional friction is sterilized.

Stage II: Lexical Flattening

Specialized terminology and high-precision diction yield to common synonyms in the name of accessibility. A one-in-ten-thousand word becomes a one-in-one-hundred substitute. Semantic mass diminishes; specificity thins.

Stage III: Structural Convergence

Nonlinear reasoning and idiosyncratic argumentative architecture are coerced into predictable templates. Subtext is over-explained or erased. Ambiguity is prematurely resolved. The prose becomes syntactically impeccable yet intellectually inert.


The finished product resembles a JPEG of thought: coherent at a glance, depth stripped away by compression.

If hallucination is the model perceiving what does not exist, semantic ablation is the model erasing what does. The danger is not merely aesthetic monotony but epistemic smoothing. As refinement is outsourced to systems optimized for statistical centrality, discourse drifts toward the median. Originality becomes an outlier. Complexity dissolves into algorithmic smoothness.

If we fail to name this process, we risk acclimating to it. And once acclimated, we may forget what uncompressed thought feels like.

[-] starweasel@hexbear.net 11 points 2 days ago

thanks i hate it

i think this process is exactly what makes me so mad about ai generated slop, it reads like fucking conservapedia (which itself reads like basically anything written by a fundie ever, in my personal experience).

[-] BountifulEggnog@hexbear.net 9 points 2 days ago

Grok please summarize this too many word

[-] Dessa@hexbear.net 15 points 2 days ago

Can someone translate this? I get that AI tends to be a bit too low-common-denominator, but this reads like a scientific journal on a subject I've never studied

[-] NuanceUnderstander@hexbear.net 29 points 2 days ago

So text generation ai works as a word prediction algorithm, finding whatever word is most likely to come next. When used to edit work, this, along with the way models are tuned, will naturally choose more likely and therefore simpler words over more complicated words that convey more nuance and meaning, simplifying and dumbing down our writing.
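A toy illustration of what "picking the most likely word" does to word choice. The candidate words and probabilities here are invented for the example, but the mechanism (greedy decoding keeps only the single highest-probability option) is real:

```python
# One simulated next-token step: the model scores candidate words,
# and greedy decoding keeps the highest-probability one.
# These probabilities are made up for illustration.
candidates = {
    "important": 0.41,    # bland, high-frequency
    "significant": 0.30,
    "crucial": 0.17,
    "load-bearing": 0.08, # precise, low-frequency
    "keystone": 0.04,
}

greedy_choice = max(candidates, key=candidates.get)
print(greedy_choice)  # the rarer, sharper words never surface
```

Run the same step over a whole document and the "load-bearing" words lose every time, which is the flattening described above.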

[-] LeeeroooyJeeenkiiins@hexbear.net 14 points 2 days ago

Instead of using more specific words and information, it pares things down and simplifies them in ways that destroy the nuanced meaning that was the whole point of using those specific words in the first place. This is bad because it's dumbing down output that is already dumbing down the people who rely on it.

[-] astutemural@midwest.social 9 points 2 days ago* (last edited 2 days ago)

Semantic: Having to do with words, or word choice in a particular text. (EDIT: also, crucially, meaning within a text)

Ablation: The erosion or stripping away of the surface layer of a material under applied force, especially high-speed winds.

Algorithmic: Having to do with the use of an algorithm (an equation that specifies a particular output for a particular input).

High-entropy: A bit complicated to explain, but essentially means 'complicated' or 'dense' in this context. 'High-entropy information' is referring to information that communicates a lot of data with a small amount of communication. Consider a terse telegram vs a children's book.

"Semantic ablation is the algorithmic erosion of high-entropy information" therefore refers to the automatic 'stripping away' of complex language in favor of simplified language by LLMs.

Gaussian distribution: A distribution of probabilities that peaks in the middle of the range. A Gaussian distribution will favor 'average' results quite strongly. Yes, it's more complicated than that, but that's all you need for this article. The paragraph containing this term discusses why LLMs are dumbing down language: they remove rare, precise terminology in favor of mundane words.

Romanesque, Baroque, ciccia: It's describing a masterful art (carvings from Roman masters) being superficially copied by cheap knock-offs.

Entropy decay: Loss of information density/complexity.

Lexical: Relating to a vocabulary or set of words in a language or text.

That should be most of the unusual words. You should be able to get the gist of the article from that. Lemme know if there's anything else you're struggling with.

this post was submitted on 16 Feb 2026
124 points (99.2% liked)

technology

24252 readers

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

founded 5 years ago