With the obligatory "fuck everyone who disregards open source licenses", I am still slightly amused that this raises eyebrows while nearly no one is complaining about MS using GitHub to train their Copilot LLM, which will help circumvent licenses & copyrights by the bazillion.
I complain all the time. But that's not the subject of this post...
Yeah exactly, fuck llms that don't honor licenses
Lots of people complained about that. I've only seen this single thread complaining about this.
What rock have you been living under??
Came here to say this. As much as I don't like China, there is really nothing to see here (apart from the source, which is there for everybody to see).
This could be illegal for git repos that do not have an open source license that allows mirroring or copying (BSD, Apache, MIT, GPL, etc.). Sometimes these repos are merely "source available", and the source is only allowed to be read, not redistributed or modified. I would say that this is a matter for each individual copyright holder, not Microsoft.
But ultimately I agree, this really isn't as big of a deal as people are making.
edit: changed some wording to be clearer
China is a sovereign entity. I'm pretty sure they can decide foreign licensing laws don't apply there.
China is a sovereign state and they should make their own laws. However, China has repeatedly promised that it will take IP concerns more seriously (the 2020 trade deal with Trump is one example of this promise). As of this moment, it seems they still use the World Intellectual Property Organization as inspiration for their IP laws. At one point, China did not acknowledge IP rights at all, but chose to acknowledge them in order to secure foreign trade. Being consistent is good for business, especially when it comes to international business.
China has never been consistent. Doing business there is all about relations with the CCP. This is a perfect example of how an authoritarian regime differs from a liberal one: one is bound by its promises and rules, and the other bends its rules to its needs.
Not like MS couldn't be sued.
It may be expensive, but it's possible.
Unlike China. Good luck suing China (or the Chinese government) as a whole. Maybe you'll get a domestic ban out of it, but I can hardly believe they will care; they'll probably just continue with their operation. Only now it's not on very legal grounds.
If I look at a few implementations of an algorithm and then implement my own using those as inspiration, am I breaking copyright law and circumventing licenses?
That depends on how similar your resulting algorithm is to the sources you were "inspired" by. You're probably fine if you're not copying verbatim and your code just ends up looking similar because that's how solutions are generally structured, but there absolutely are limits there.
If you're trying to rewrite something into another license, you'll need to be a lot more careful.
What's the limit? This needs to be absolutely explicit and easy to understand, because this is what LLMs are doing. They take hundreds of thousands of similar algorithms and create an amalgamation of them.
When is it copying, and when is it "inspiration"? What's the line between learning and copying?
I disagree that it needs to be explicit. The current law is the fair use doctrine, which generally has more to do with the intended use than specific amounts of the text/media. The point is that humans should know where that limit is and when they've crossed it, with motive being a huge part of it.
I think machines and algorithms should have to abide by a much narrower understanding of "fair use", because they don't have motive or the ability to intuit when they've crossed the line. So scraping copyrighted works to produce an LLM should probably, generally, be illegal, imo.
That said, our current copyright system is busted and desperately needs reform. We should be limiting copyright to 14 years (as in the original Copyright Act of 1790), with an option to explicitly extend for another 14 years. That way LLMs could scrape content published >28 years ago with no concerns, and most content produced >14 years ago (esp. forums and social media, where copyright extension is incredibly unlikely). That would be reasonable IMO and would sidestep most of the issues people have with LLMs.
First, this conversation has little to do with fair use. Fair use is when there is an acceptable reason to break copyright, for example when you are making a parody or critique, or using the work for educational purposes.
What we are talking about is the act of reading and/or learning and then using that information in order to synthesize new material. This is essentially the entire point of education. When someone goes to art school, they study many different artists and their techniques. They learn from these techniques as they merge them together in different ways to create novel art.
Everybody recognizes this is perfectly OK and to assume otherwise is absurd. So what we are talking about is not fair use, but extracting data from copyrighted material and using it to create novel material.
The distinction here is you claim when this process is automated, it should become illegal. Why?
My opinion is if it's legal for a human to do, it should be legal for a human to automate.
Sure, but that's not what LLMs are doing. They're breaking down works to reproduce portions of them in answers. Learning is about concepts; LLMs don't understand concepts, they just compare inputs with training data to provide synthesized answers.
The process a human goes through is distinctly different from the process current AI goes through. The process an AI goes through is closer to a journalist copy-pasting quotations into their article, which falls under fair use. The difference is that AI will synthesize quotations from multiple (many) sources, whereas a journalist will generally just do one at a time, but it's still the same process.
As a big proponent of open source, I see nothing wrong even with copying code - the point is that you should not be allowed to claim something as your own idea, and definitely not to claim copyright on code that was "inspired" by someone else's work. The easiest solution would be to forbid patents on software (or patents altogether). The only purpose that FOSS licenses have is to prevent corporations from monetizing the work under the license.
Well, let's say there's an algorithm to find the length of the longest palindrome you can build from a set of letters. I look at 20 different implementations. Some people use hashmaps, some don't. Some do it recursively, some don't. Etc.
I consider all of them and create my own. I decide to implement both the recursive and the hashmap approach myself, but also add certain novel elements.
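(Say my version ends up as something like this minimal hashmap-counting sketch, assuming the problem is "longest palindrome you can build from a bag of letters"; the names are just illustrative:)

```python
from collections import Counter

def longest_palindrome_length(letters: str) -> int:
    # Count how often each letter occurs.
    counts = Counter(letters)
    # Every pair of identical letters can be mirrored around the centre.
    length = sum(count // 2 * 2 for count in counts.values())
    # One leftover letter with an odd count can sit in the middle.
    if any(count % 2 for count in counts.values()):
        length += 1
    return length

print(longest_palindrome_length("abccccdd"))  # 7, e.g. "dccaccd"
```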
Am I copying code? Am I breaking copyright? Can I claim I wrote it? Or do I have to give credit to all 20 people?
As for forbidding patents on software, I agree entirely. Would be a net positive for the world. You should be able to inspect all software that runs on your computer. Of course that's a bit idealistic and pipe-dreamy.
Again, I don't have a problem with copying code - but as a developer I know whether I took enough of someone else's algorithm that I should mention the original authorship :) My only problem with circumventing licenses is when people put more restrictive licenses on plagiarized code.
And - I guess - in conclusion: if someone makes a license so permissive that a restrictive (commercial) license or patent can be slapped on plagiarized / derived work, that is also something I don't want to see.
I have no problem copying code either. The question is at what point it goes from a straight copy to something merely "inspired by" the original.
How abstracted does it have to be before it's OK? If you write a merge sort, it might be similar to the one you learned when you were studying data structures.
Should you make sure you attribute your data structure textbook every time you write a merge sort?
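For illustration, a textbook-style top-down merge sort in Python looks roughly like this, and just about any independent implementation lands within a hair of it:

```python
def merge_sort(items):
    # Base case: a list of zero or one elements is already sorted.
    if len(items) <= 1:
        return items
    # Split, sort each half recursively, then merge the sorted halves.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```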
Are you understanding the point I'm trying to get at?
My trivial (non-legal ;) answer is: if you are working for a corporation that is looking to patent something or make something closed-source, then the moment you have ever looked at a single line of my code relevant to what you are doing, you are forbidden from releasing it under any more restrictive license. If you are a private person working on open source? Then you be the judge of whether you copied enough of my code that you believe it is more than just "inspired by".
Are you just trying to make a bad pro-China argument or have you never been online before?
I see it more as a good anti-Microsoft argument 🤷🏻♀️
“Why does no one say murder is bad unless China is murdering”
Isn’t a good anti-murder argument
I can not fathom how you absolutely nailed the essence of my comment, yet misunderstood it (and - arguably - your own example) so fundamentally.
Let me try to help, once:
"Why do most people not complain about murder when Microsoft is doing it, but when China is doing it, the very justified outrage can be heard?"
❤️
I cannot fathom how you absolutely nailed the essence of my comment, yet misunderstood it (and - arguably - your own example) so fundamentally.
People do criticize Microsoft for using open source data to train LLMs, just like people criticize murder
Hence the query about having never been on the internet before