
AI companies claim their tools couldn't exist without training on copyrighted material. It turns out they can; it just takes more work. To prove it, AI researchers trained a model on a dataset built only from public domain and openly licensed material.

What makes it difficult is curating the data, but once the data has been curated, in principle everyone can use it without having to go through the painful part again. So the whole "we have to violate copyright and steal intellectual property" line is (as everybody already knew) total BS.
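For anyone curious what "curate once, reuse forever" looks like in practice, here's a minimal sketch of license-based filtering. The field names and license identifiers are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of license-based curation: keep only documents whose license
# metadata marks them as public domain or openly licensed. The "license" field
# and the identifiers below are illustrative assumptions.
OPEN_LICENSES = {"public-domain", "cc0", "cc-by", "cc-by-sa", "mit", "apache-2.0"}

def is_openly_licensed(doc: dict) -> bool:
    # Normalize and check the document's declared license.
    return doc.get("license", "").lower() in OPEN_LICENSES

def curate(corpus: list[dict]) -> list[dict]:
    # The expensive part is verifying provenance; once that's done, the
    # filtered corpus can be reused by anyone without repeating the work.
    return [doc for doc in corpus if is_openly_licensed(doc)]

docs = [
    {"text": "A public-domain novel ...", "license": "public-domain"},
    {"text": "A scraped news article ...", "license": "all-rights-reserved"},
]
print(len(curate(docs)))  # 1 -- only the openly licensed document survives
```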

top 8 comments
[-] robot_dog_with_gun@hexbear.net 19 points 2 days ago

but i don't like copyright laws

[-] yogthos@lemmygrad.ml 18 points 2 days ago

Indeed, intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists. Disney's ruthless copyright enforcement, for instance, sharply contrasts with its own history of mining public-domain stories. Meanwhile, OpenAI scraping data at scale exposes the hypocrisy of a system that privileges corporate IP hoarding over collective cultural wealth. Large corporations can ignore copyright without being held to account while regular people cannot. In practice, copyright helps capitalists far more than it helps individual artists.

[-] RedWizard@hexbear.net 19 points 2 days ago

The group built an 8 TB ethically-sourced dataset.

My question is, is this dataset also Free Range or Cage Free?

[-] optissima@lemmy.ml 7 points 2 days ago

Cage-free, as it hasn't been around long enough to end up in publicly owned data

[-] TommyBeans@hexbear.net 4 points 2 days ago

Is 8 TB even much for data? I thought these things needed to feed on hundreds of terabytes of data

[-] BountifulEggnog@hexbear.net 3 points 2 days ago

It's a bit weird to refer to it in terabytes; reading the paper, their biggest model was trained on 2 trillion tokens. Qwen 3 was pre-trained on 36T, with post-training on top of that. It's kinda fine for what it is, but this absolutely contributes to its poor performance.
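For a rough sense of how the terabyte and token figures line up: the ~4 bytes per token ratio below is a common rule of thumb I'm assuming for English text, not a number from the paper.

```python
# Back-of-envelope conversion from corpus size in bytes to an approximate
# token count. ~4 bytes per token is a rough assumption for English text
# with BPE-style tokenizers, not a measured figure from the paper.
def approx_tokens(corpus_bytes: float, bytes_per_token: float = 4.0) -> float:
    return corpus_bytes / bytes_per_token

corpus_bytes = 8e12  # the 8 TB ethically sourced dataset
tokens = approx_tokens(corpus_bytes)

print(f"~{tokens / 1e12:.1f}T tokens")  # ~2.0T, in line with the paper's 2 trillion
print(f"~{tokens / 36e12:.0%} of Qwen 3's ~36T pre-training tokens")  # ~6%
```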

[-] mayo_cider@hexbear.net 1 points 2 days ago

Unfortunately this doesn't really prove anything; training requires exponentially more data to gain any reasonable advances

You won't get GPT-3 with legal material

[-] yogthos@lemmygrad.ml 4 points 2 days ago

That's just a limitation of current training techniques. There's no reason to expect that new techniques can't be developed that need far less data. In fact, we already see that simply making models bigger isn't actually helping. The research is now moving towards ideas like reinforcement learning and neurosymbolic approaches.

this post was submitted on 06 Jun 2025
60 points (100.0% liked)

technology
