you are viewing a single comment's thread
this post was submitted on 22 Aug 2023
677 points (95.6% liked)
Technology
If I memorize the text of Harry Potter, my brain does not thereby become a copyright infringement.
A copyright infringement only occurs if I then reproduce that text, e.g. by writing it down or reciting it in a public performance.
Training an LLM from a corpus that includes a piece of copyrighted material does not necessarily produce a work that is legally a derivative work of that copyrighted material. The copyright status of that LLM's "brain" has not yet been adjudicated by any court anywhere.
If the developers have taken steps to ensure that the LLM cannot recite copyrighted material, that should count in their favor, not against them. Calling it "hiding" is backwards.
Let's not pretend that LLMs are like people, who read a bunch of books and draw inspiration from them. An LLM does not think, nor does it have an actual creative process the way we do. It should still be a breach of copyright.
... you're getting into philosophical territory here. The plain fact is that LLMs generate cohesive text that is original and does not occur in their training sets, and it's very hard, if not impossible, to get them to quote copyrighted source material back to you verbatim. Whether you want to call that "creativity" is up to you, but it undercuts the notion that LLMs commit copyright infringement.
If Google took samples from millions of different songs that were under copyright and created a website that allowed users to mix them together into new songs, they would be sued into oblivion before you could say "unauthorized reproduction."
You simply cannot compare one person memorizing a book to corporations feeding literally millions of pieces of copyrighted material into a blender and acting like the resulting sausage is fine because "only a few rats fell into the vat, what's the big deal?"
Terrible analogy.
Which one? And why exactly?
The analogy talks about mixing samples of music together to make new music, but that's not what is happening in real life.
The computers learn human language from the source material, but they do not reference the source material when creating responses. They create new, original responses that do not appear in any of the source material.
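To illustrate the "statistics, not storage" point with a deliberately tiny sketch (a word-level bigram model, nowhere near a real transformer LLM): training records only which word tends to follow which, and generation samples from those counts, so the output is assembled from learned transitions rather than looked up in the original text. The corpus and function names here are made up for the example.

```python
# Toy sketch (NOT a real LLM): a word-level bigram model "trained" on a
# tiny corpus. After training it holds only transition statistics, not
# the text itself, and samples new sequences from those statistics.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count which word follows which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample a word sequence from the learned transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: no observed successor
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 8))
```

The generated sequence (e.g. "the dog sat on the rug" or "the mat the cat sat on") need not appear anywhere in the corpus, which is the analogue of the claim above, though a real model with billions of parameters can also memorize long passages in a way this toy cannot.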