226 points (95.9% liked) · submitted 09 Jun 2025* (last edited 1 week ago) by Pro@programming.dev to c/technology@lemmy.world

In large language model (LLM) pretraining, data quality is believed to determine model quality. In this paper, we re-examine the notion of "quality" from the perspective of pre- and post-training co-design. Specifically, we explore the possibility that pre-training on more toxic data can lead to better control in post-training, ultimately decreasing a model's output toxicity. First, we use a toy experiment to study how data composition affects the geometry of features in the representation space. Next, through controlled experiments with OLMo-1B models trained on varying ratios of clean and toxic data, we find that the concept of toxicity enjoys a less entangled linear representation as the proportion of toxic data increases. Furthermore, we show that although toxic data increases the generational toxicity of the base model, it also makes the toxicity easier to remove. Evaluations on ToxiGen and RealToxicityPrompts demonstrate that models trained on toxic data achieve a better trade-off between reducing generational toxicity and preserving general capabilities when detoxifying techniques such as inference-time intervention (ITI) are applied. Our findings suggest that, with post-training taken into account, bad data may lead to good models.
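
The mechanism the abstract points at is easier to see in toy form. Below is a minimal numpy sketch, not the paper's code, of the two ideas involved: fitting a linear probe to find a "toxicity direction" in activation space, then an ITI-style edit that projects that direction out at inference time. Every name, dimension, and constant here is an illustrative assumption, and the actual ITI method steers attention-head outputs along probe directions rather than applying one global projection.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n = 64, 2000          # hypothetical hidden size / sample count

# Pretend "toxicity" is a single linear direction in the representation space.
true_dir = rng.normal(size=d_model)
true_dir /= np.linalg.norm(true_dir)

labels = rng.integers(0, 2, size=n)                  # 1 = toxic, 0 = clean
h = rng.normal(size=(n, d_model))                    # base activations
h += 1.5 * np.outer(2.0 * labels - 1.0, true_dir)    # shift along the toxicity axis

# (1) Linear probe: plain logistic regression by gradient descent.
w = np.zeros(d_model)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(h @ w)))
    w -= 0.1 * h.T @ (p - labels) / n
probe_dir = w / np.linalg.norm(w)
print("probe/ground-truth alignment:", round(abs(probe_dir @ true_dir), 3))

# (2) ITI-style intervention: subtract the projection onto the probe direction.
def detoxify(acts, direction, alpha=1.0):
    return acts - alpha * np.outer(acts @ direction, direction)

# After the edit, the probe's toxicity score collapses to chance (0.5).
scores_before = 1.0 / (1.0 + np.exp(-(h @ w)))
scores_after = 1.0 / (1.0 + np.exp(-(detoxify(h, probe_dir) @ w)))
print("mean toxicity score before/after:",
      round(scores_before.mean(), 3), round(scores_after.mean(), 3))
```

The paper's claim, in these terms, is that more toxic pretraining data makes the real analogue of `true_dir` less entangled with other features, so an edit like `detoxify` removes toxicity with less collateral damage to general capabilities.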

[-] Reverendender@sh.itjust.works 103 points 1 week ago

I know everyone on Lemmy hates LLMs, but this is really interesting

[-] Sabin10@lemmy.world 80 points 1 week ago

I dislike that people are relying on them to do all their thinking for them while also being incredibly interested in the tech behind them.

[-] L0rdMathias@sh.itjust.works 32 points 1 week ago

I recently realized it's a non-issue. The people doing this have already been looking for decades to find new ways to rot their minds. LLMs are just the latest in a long line of tools that help them tune out.

[-] Plebcouncilman@sh.itjust.works 12 points 1 week ago

I’ve said this a few times in a different way and I always get downvoted. The fact is that the people who will use LLMs to think for them were not gonna think a lot in the first place.

[-] youCanCallMeDragon@lemmy.world 8 points 1 week ago

This is true, but we don’t need people putting glue on their pizza. These people used to have a person to ask; now they’ll be asking Sam Altman.

[-] Plebcouncilman@sh.itjust.works 3 points 1 week ago* (last edited 1 week ago)

Well I would make the argument that someone stupid enough to do such a thing kinda deserves whatever consequences their actions have. I find that people learn faster when actions have consequences instead of everything being babyproofed.

[-] balder1991@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Not when companies force them on you as well.

My current company forces me to use it and measures how many prompts I’m making as “productivity”.

[-] SculptusPoe@lemmy.world 30 points 1 week ago

I wish they would tone down the crusade. This is some of the most interesting technology to come out in decades.

[-] Reverendender@sh.itjust.works 29 points 1 week ago

It’s extremely useful for many things if you know how to use it, and it’s annoying and useless for many others, which is what they fixate on and knee-jerk react to.

[-] 4am@lemm.ee 20 points 1 week ago

It’s annoying that every middle manager is trying to become the hero of their company by pushing it inappropriately into every single field at the expense of productivity and jobs, while simultaneously the largest, most powerful companies are slinging their SaaS solutions built on stolen data, which are destroying communities of both the physical and hobby varieties and consuming more natural resources than all the fucking crypto scams of the last ten years.

But yeah it’s neat I guess

[-] IndiBrony@lemmy.world 6 points 1 week ago

My gf's employer was going into administration last month. AI was surprisingly competent in determining where to seek advice and had a decent understanding of what to expect and how to approach things such as not getting paid on time (which happened last week).

Of course, we double and triple checked any information given to us with the relevant bodies, but it provided a little relief to go into something so chilling not being completely clueless.

AI has its use, but you have to know how to extract the information you need.

It's stupid the way people are using it for therapy. Like, by all means ask it if it knows any organisations which can help you, then look those up, but don't tell it a load of personal information about your relationship, because the reply will be something akin to the advice you see on r/relationships (which is probably where it scraped its data from) 😅

[-] elbarto777@lemmy.world 27 points 1 week ago

This is a "guns don't kill people - people kill people" kind of scenario.

As a standalone thing, LLMs are awesome.

What sucks is greedy people using them for the wrong reasons.

It's like robots. Playing with robots is awesome. Firing 1,000 people, replacing them with robots, and not sharing the benefits with the community sucks.

[-] taladar@sh.itjust.works 4 points 1 week ago

As a standalone thing, LLMs are awesome.

They really aren't, though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.

[-] scrion@lemmy.world 10 points 1 week ago* (last edited 1 week ago)

Those numbers are baseless exaggerations. There are plenty of tasks which they solve perfectly, today. It's just that a bunch of dicks operate them, and the cost of operating them is way too high.

Also:

  • environmental impact of AI
  • unethical acquisition of training data
  • the dichotomy in how conservative politics treats AI companies versus private copyright law
  • "undress AI" and deepfakes

It's not that they're not useful, that's just nonsense.

[-] taladar@sh.itjust.works 2 points 1 week ago

There are plenty of tasks which they solve perfectly, today.

Name a single task you would trust an LLM to solve for you, where you'd feel confident the output was correct without checking it. Because that is my definition of "perfectly", and AI falls very, very far short of that.

[-] balder1991@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

That’s a bit too dismissive. I’ve had a lot of interesting chats with LLMs that led me to find out what I didn’t understand about something. As an example, I’m reading a book explaining some practices of Structured Concurrency in Swift, and many times I asked ChatGPT if the author was correct about some phrasing that seemed wrong to me. And ChatGPT was able to explain why it was right in that context.

[-] bimbimboy@lemm.ee 25 points 1 week ago

I'm cool with it. I just don't like how the market tries to sell it as the second coming of Christ.

[-] pennomi@lemmy.world 13 points 1 week ago

“Don’t believe that marketing department” is one of those things everybody needs to learn at some point in their life.

[-] bimbimboy@lemm.ee 4 points 1 week ago

I blame every sci-fi Hollywood movie telling us how powerful and almighty the AI is. How it's going to be the magic pill that entirely destroys or saves humanity by itself.

Now we have an entire generation believing this crap.

[-] pennomi@lemmy.world 6 points 1 week ago

I mean, it still could be. But LLMs are not the AGI we’re expecting.

[-] ShinkanTrain@lemmy.ml 4 points 1 week ago* (last edited 1 week ago)

You can blame Hollywood for a lot of things, including this, but sci-fi authors have been doing it for longer. That's where Hollywood took those stories from in the first place.

[-] logicbomb@lemmy.world 11 points 1 week ago* (last edited 1 week ago)

This is the same market that tried to add blockchain to everything when that first became well-known.

Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

[-] bimbimboy@lemm.ee 2 points 1 week ago

Some of the biggest forces in the market are extraordinarily stupid people trying to ride every buzzword that comes along.

I think the biggest forces sell the fantasy to smaller forces. This way they can capitalize on the smaller forces believing the hype.

[-] AnAverageSnoot@lemmy.ca 6 points 1 week ago

I don't dislike LLMs; I dislike people who treat them as anything more than an advanced search engine and stupidly give them all their confidential data. Seen it happen too much at work.

[-] Zexks@lemmy.world 3 points 1 week ago

I love how everyone jumps on your comment after being called out and acts like they don't absolutely hate every stitch of it. But even in their excuses you can see the lies.

[-] ohwhatfollyisman@lemmy.world 52 points 1 week ago

10% 4chan

why didn't they just say 0.4chan and be done with it?

[-] And009@lemmynsfw.com 17 points 1 week ago

Don't have gold, but please get out anyways.

[-] LainTrain@lemmy.dbzer0.com 49 points 1 week ago

They taught it toxicity so it knows what they mean by "don't be toxic". It's only a shame so few flesh and blood models take the same lesson away from it.

[-] InnerScientist@lemmy.world 2 points 1 week ago

The good within the bad

[-] Dadifer@lemmy.world 20 points 1 week ago

I really thought this was The Onion.

[-] Iceblade02@lemmy.world 20 points 1 week ago

Interesting - I can sort of intuit why it might help. Feeding the model bad data and training it to identify it as such should be advantageous compared to leaving it entirely unaware of it.

[-] technocrit@lemmy.dbzer0.com 2 points 1 week ago

bad data

Can you define this? The authors/grifters call it "toxic data" but never define that either.

[-] ChairmanMeow@programming.dev 4 points 1 week ago

It's a pretty simple concept. Train any kind of model on only "good" data, and it fails to distinguish between that data and bad data.

Take image recognition. Feed it hundreds of images of an orange and ask it to find the orange. After training, it will be very good at finding that orange.

Then add a picture of a Pomeranian dog in there, and watch as the model confidently marks it as an orange.

The model should also have been trained on lots of images that don't feature what you want it to find, so it learns to make that distinction.
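
For the curious, here's that failure mode as a runnable toy: a minimal numpy sketch with made-up two-dimensional features (orange-ness, roundness) rather than real images. A classifier that only ever saw the label "orange" has no reason to answer anything else; add negatives and the boundary has to actually separate the classes.

```python
import numpy as np

def train_logreg(X, y, steps=2000, lr=0.5):
    """Plain logistic regression by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

rng = np.random.default_rng(1)
oranges = rng.normal([1.0, 0.9], 0.05, size=(100, 2))  # (orange-ness, roundness)
dogs    = rng.normal([0.8, 0.7], 0.05, size=(100, 2))  # Pomeranians: orange-ish, round-ish

# Trained on oranges only: every label it ever saw was "orange",
# so the cheapest fit is to answer "orange" to everything.
w, b = train_logreg(oranges, np.ones(len(oranges)))
dog_scores = 1.0 / (1.0 + np.exp(-(dogs @ w + b)))
print("dogs called oranges (positives only):", (dog_scores > 0.5).mean())   # 1.0

# Trained on both: the boundary must actually separate oranges from dogs.
X = np.vstack([oranges, dogs])
y = np.r_[np.ones(100), np.zeros(100)]
w2, b2 = train_logreg(X, y)
dog_scores2 = 1.0 / (1.0 + np.exp(-(dogs @ w2 + b2)))
print("dogs called oranges (with negatives):", (dog_scores2 > 0.5).mean())  # ~0.0
```

The paper's version of this is the same contrast one level up: a model that saw some toxic text in pretraining represents "toxic" as a cleaner, more separable concept, which is what makes it easier to suppress later.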

[-] Tarquinn2049@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

There are a couple relatively safe places on 4chan. But like 90% of the content makes for great "don't do this if you want to get along with humans" training.

And the goal of training an AI is that it does want to get along with humans.

[-] L0rdMathias@sh.itjust.works 6 points 1 week ago

Interesting training strategy. Makes a lot of sense intuitively. I'm worried this makes the model even more susceptible to prompt injections, though. Feels like this method adds more attack vectors? It's unfortunate they didn't attempt to test long-term hardening and stability, but that's probably beyond their scope.

[-] technocrit@lemmy.dbzer0.com 2 points 1 week ago

Just because something makes sense intuitively to one person, that doesn't mean it makes sense scientifically.

They're probably not testing anything further because they can't even define their terms.

[-] L0rdMathias@sh.itjust.works 2 points 1 week ago

Yes, I agree. It's relieving to see a scientific result be similar to what one would intuit.

[-] Endmaker@ani.social 6 points 1 week ago* (last edited 1 week ago)

It's like how vaccinations protect us from illnesses.

[-] thefartographer@lemm.ee 6 points 1 week ago

Not to anthropomorphize LLMs, but.... Like a vaccine?

[-] Pnut@lemm.ee 5 points 1 week ago

My hope was that AI would, at least, bear some disgust for the worst of humanity. My new fear is that AI will bear disgust for humanity.

[-] Kolanaki@pawb.social 4 points 1 week ago* (last edited 1 week ago)

That's because to an AI, 4chan is like prison where it's raped and beaten on a daily basis. It doesn't want to go back, so it behaves.

[-] Steamymoomilk@sh.itjust.works 4 points 1 week ago

When's the AI trained only on 4chan dropping?

It needs to be fake and gay

[-] semperverus@lemmy.world 4 points 1 week ago

That exists. It's called GPT-4chan, and it went exactly like you'd expect.

[-] qaz@lemmy.world 3 points 1 week ago

Fighting fire with fire

[-] jsomae@lemmy.ml 3 points 1 week ago* (last edited 1 week ago)

Headlines should not say "scientists," they should name the institution. (Harvard in this case.)

[-] Unbecredible@lemm.ee 2 points 1 week ago

Headlines should not say "Harvard", they should name the researchers. (Rachel Greene in this case.)

I don't know why I had to write this.

[-] jsomae@lemmy.ml 4 points 1 week ago* (last edited 1 week ago)

Who's Rachel Greene? We all know Harvard, though, and have an idea of its respectability. The researcher's name, if they're not well known, should be in the body instead.

[-] FiskFisk33@startrek.website 4 points 1 week ago

"Harvard scientist Rachel Greene"

Everyone's happy

[-] TypicalHog@lemm.ee 2 points 1 week ago

4chan is fun!

[-] Mr_Dr_Oink@lemmy.world 2 points 1 week ago

So is it essentially saying that in order to not output garbage, it needs to first know what garbage is?

Is it just me who thinks this seems like a no-brainer?

It almost draws parallels to many societal issues. Knowledge is power.

People tend towards intolerance and hatred when they don't understand the thing they're angry at. The more they know, the better they behave.
