submitted 1 week ago* (last edited 1 week ago) by Pro@programming.dev to c/technology@lemmy.world

In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with Olmo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on ToxiGen and RealToxicityPrompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
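For readers who haven't met the technique: inference-time intervention steers a model's hidden activations along a learned direction at generation time, leaving the weights untouched. Below is a minimal, hypothetical sketch of that general idea, not the paper's implementation; gpt2 stands in for Olmo-1B, and the random `toxicity_direction` stands in for a direction that would really be fit by a linear probe on toxic vs. clean activations.

```python
# Minimal sketch of ITI-style activation steering at inference time.
# gpt2 is a stand-in for Olmo-1B; the random direction is a stand-in for
# one fit by a linear probe on toxic vs. clean hidden states.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

hidden = model.config.hidden_size
toxicity_direction = torch.randn(hidden)      # hypothetical probe direction
toxicity_direction /= toxicity_direction.norm()

alpha = 5.0  # steering strength: higher detoxifies more, hurts fluency more

def steer(module, inputs, output):
    # Transformer blocks return a tuple; hidden states are element 0.
    h = output[0]
    # Subtract the component of each token's activation along the direction.
    proj = (h @ toxicity_direction).unsqueeze(-1) * toxicity_direction
    return (h - alpha * proj,) + output[1:]

# Intervene at a single mid-depth block, as ITI-style methods typically do.
handle = model.transformer.h[6].register_forward_hook(steer)

ids = tok("The internet is full of", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```

The `alpha` knob is exactly the trade-off the abstract describes: steering harder removes more toxicity but costs more general capability.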

39 comments
[-] 10001110101@lemm.ee 1 points 1 week ago* (last edited 1 week ago)

Kinda weird GPT-4chan wasn't referenced. A guy fine-tuned GPT-J on 4chan's /pol/ board, then deployed bots to write posts there. I guess it was more of a stunt than academic or scientific work, but training on 4chan improved the model's performance on a truthfulness benchmark.

[-] Grimy@lemmy.world 1 points 1 week ago

Those are actually some very good results. Funny situation: if the copyright companies win the AI legislative war, 4chan is going to get at least twice as much as Reddit did for its data.

It's also interesting that the model degrades faster when it has to "untrain" the toxic data, so to speak.

[-] AeonFelis@lemmy.world 1 points 1 week ago

So basically... by being familiar with 4chan the model knows better what not to do?

[-] Grimy@lemmy.world 1 points 1 week ago

Yup. Sucks for everyone having fun jailbreaking them. It is going to get much harder.

[-] MTK@lemmy.world 1 points 1 week ago

Makes sense if you look at abliterated models: once abliterated and retrained, they seem to improve. Imo we add too much human bias by trying to guide the LLM. Censored models are fine and are needed in some situations, but shouldn't the base model just be the data, with fine-tuning toward the desired output afterwards?
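For anyone unfamiliar with the term: abliteration edits a model's weights so it can no longer write along a single "refusal direction" in activation space. Here is a rough, self-contained sketch of that one projection step, with hypothetical names; real abliteration tooling applies it to every attention and MLP output matrix, with the direction estimated from contrasting prompt sets.

```python
# Rough sketch of the core abliteration step: make a weight matrix unable
# to write along a chosen direction. Names and setup are hypothetical.
import torch

def project_out(W: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
    """Return W with direction d removed from its output space: (I - d d^T) W."""
    d = d / d.norm()
    return W - torch.outer(d, d) @ W

hidden = 64
W = torch.randn(hidden, hidden)  # stand-in for an attention/MLP output matrix
d = torch.randn(hidden)          # stand-in "refusal direction"; in practice,
                                 # the mean activation difference between
                                 # harmful and harmless prompts at one layer
W_abl = project_out(W, d)

# After the edit, no input can produce output along d:
d_hat = d / d.norm()
print((d_hat @ W_abl).norm())  # ~0, up to floating-point error
```

It's the same linear-direction picture the paper leans on for toxicity, just baked into the weights instead of applied at inference time.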

[-] Naevermix@lemmy.world 0 points 1 week ago* (last edited 1 week ago)

I envision a Gemini-powered bot that cracks CAPTCHAs and posts "woke" replies on 4chan. If you're an antivaxxer, antisemite, Nazi, racist, Zionist, or otherwise, it will debate you. It will not get tired. It will not get mad. It will maintain a sense of decorum indefinitely, and it will never, ever stop. If some far-right extremist decides to do the same, it will have the advantage that academia is left-leaning, meaning the model can cite widely recognized studies.

Dead internet theory and so on, but I'll gladly completely and utterly destroy the internet if it means the filth dies with it.

[-] Disaster@sh.itjust.works 0 points 1 week ago

There's little evidence that debate changes people's minds.

[-] prole@lemmy.blahaj.zone 1 points 1 week ago

Seems more about keeping the idiots occupied so they can't flood the zone with their bullshit.

[-] cupcakezealot@lemmy.blahaj.zone 0 points 1 week ago

can we stop referring to LLMs as if they're capable of thought? they don't make decisions; their programming just responds to patterns.

[-] MangoCats@feddit.it 1 points 1 week ago

Do you make decisions, or are you just 1300 grams of synapses responding to stimuli?

[-] technocrit@lemmy.dbzer0.com -1 points 1 week ago* (last edited 1 week ago)

Fresh "AI" pseudo-science for a Monday morning.

These grifters never even define "bad/toxic data". It's just 4chan ffs.

this post was submitted on 09 Jun 2025
226 points (95.9% liked)
