AI companies claim their tools couldn't exist without training on copyrighted material. It turns out they could; it just takes more work. To prove it, AI researchers trained a model on a dataset built only from public domain and openly licensed material.

What makes it difficult is curating the data, but once it has been curated, in principle anyone can reuse it without going through the painful part again. So the whole "we have to violate copyright and steal intellectual property" line is (as everybody already knew) total BS.
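As a rough illustration of that curation step, here's a minimal sketch in Python of filtering a corpus down to openly licensed documents. The `license` field and the allowlist are assumptions made up for the example, not the researchers' actual pipeline:

```python
# Sketch of license-based corpus filtering: keep only documents whose
# declared license is public domain or openly licensed. The "license"
# field and the allowlist below are illustrative assumptions.

OPEN_LICENSES = {
    "public-domain",
    "cc0-1.0",
    "cc-by-4.0",
    "cc-by-sa-4.0",
    "mit",
}

def is_openly_licensed(doc: dict) -> bool:
    """True if the document's declared license is on the allowlist."""
    return doc.get("license", "").strip().lower() in OPEN_LICENSES

corpus = [
    {"text": "Call me Ishmael...", "license": "Public-Domain"},
    {"text": "Some blog post", "license": "all-rights-reserved"},
]

clean = [doc for doc in corpus if is_openly_licensed(doc)]
print(f"kept {len(clean)} of {len(corpus)} documents")  # kept 1 of 2
```

The hard part in practice is that license metadata is often missing or wrong, which is why building the dataset once is painful but reusing it afterwards is cheap.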

RedWizard@hexbear.net 19 points 3 days ago

The group built an 8 TB ethically-sourced dataset.

My question is, is this dataset also Free Range or Cage Free?

optissima@lemmy.ml 7 points 3 days ago

Cage-free, as it hasn't been around long enough to be in publicly owned data

TommyBeans@hexbear.net 4 points 3 days ago

Is 8 TB even anything for data? I thought these things needed to feed on hundreds of terabytes of data

BountifulEggnog@hexbear.net 3 points 2 days ago

It's a bit weird to measure it in terabytes; reading the paper, their biggest model was trained on 2 trillion tokens. Qwen 3 was pretrained on 36T tokens, with post-training on top of that. It's kinda fine for what it is, but this absolutely contributes to its poor performance.
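As a rough sanity check, the two figures do line up if you assume about 4 bytes of raw text per token, a common rule of thumb for English (the real ratio depends on the tokenizer and the data mix; it's not a figure from the paper):

```python
# Back-of-the-envelope check: does "8 TB" match "~2 trillion tokens"?
# Assumes ~4 bytes of raw text per token, a rough heuristic, not a
# figure from the paper.

BYTES_PER_TOKEN = 4                      # assumed heuristic

dataset_bytes = 8 * 10**12               # 8 TB of text
approx_tokens = dataset_bytes / BYTES_PER_TOKEN
print(f"~{approx_tokens / 1e12:.1f} trillion tokens")   # ~2.0 trillion

qwen3_tokens = 36 * 10**12               # Qwen 3's reported pretraining budget
print(f"Qwen 3 saw ~{qwen3_tokens / approx_tokens:.0f}x more data")  # ~18x
```

So 8 TB of text is roughly the 2 trillion tokens the paper reports, about 18x less pretraining data than Qwen 3 saw before post-training.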
