327 points · submitted 27 Dec 2023 (10 months ago) by some_guy@lemmy.sdf.org to c/news@lemmy.world

Millions of articles from The New York Times were used to train chatbots that now compete with it, the lawsuit said.

[-] spaduf@slrpnk.net 57 points 10 months ago

The existing industry that's popped up around LLMs has conveniently ignored that what these models are doing may have been illegal the whole time, and a lot of the experts knew it. This is why it's so important for folks to realize that the industry is not just thin wrappers around ChatGPT (and that interesting applications of this technology are largely being crowded out by the lowest-hanging fruit). If this is ruled as not fair use, then the whole industry will basically disappear overnight and we'll have to rebuild it from scratch, either with a new business model that pays authors or as open-source/crowdsourced models (probably both). All that said, we'd almost certainly be better off for it. OpenAI may have kicked off the most recent "gold rush," but their methods have been terrible for both the industry at large and for further development of the tech.

[-] Kushia@lemmy.ml 17 points 10 months ago

They always should have had the right business model, where they paid for access to this content for AI training. They knew it was wrong, but in their rush to be known they decided it was better to take without asking and beg forgiveness later. Regardless of what happens now, people have already made a name for themselves by swindling the likes of Microsoft, and they'll have long, well-paying careers because of it.

[-] gedaliyah@lemmy.world 8 points 10 months ago

It seems like it was almost necessary to go through this phase for the sake of developing the tech. Doesn't a lot of CS research use web-crawling algorithms to gather data without checking whether the information is licensed for such use? What about the fediverse? It remains unclear what the copyright and licensing status will be should it come into question. There is no EULA to access fedi, just a set of open protocols.

[-] GarlicToast@programming.dev 4 points 10 months ago

Testing an algorithm for a paper and releasing the weights/data is not the same as selling the output of the algorithm.

[-] Blue_Morpho@lemmy.world 3 points 10 months ago

I seem to remember the NYT suing Google years ago for effectively the same thing. Google copies all NYT articles into its index, then sells ads to people searching for that copyrighted information.

[-] EnderMB@lemmy.world 3 points 10 months ago

These models can still be trained on data that they're allowed to use, but I think what we're seeing is that the better LLM services are probably trained on shocking amounts of private data, whereas the less performant ones probably don't use stolen data.

[-] Blue_Morpho@lemmy.world 2 points 10 months ago

It certainly seems illegal, but if it is, then all search engines are illegal too. They do the same thing: search engines copy everything to their internal servers, index it, then sell access to that copyrighted data (via ads and other indirect revenue generators).

[-] yamanii@lemmy.world 1 points 10 months ago

If companies can profit off stolen work and charge for access to it, why can't I just pirate it myself without making anyone richer?

[-] GarlicToast@programming.dev 12 points 10 months ago

OpenAI needs to be nuked for that, just as Microsoft needs to get nuked for training Copilot on GPL code.

[-] whoisearth@lemmy.ca 5 points 10 months ago

I have to say, it's fun to watch. I'm bringing this up with my boss when he's back, because all the Fortune 500 companies are big on both products right now, and it makes sense both from a technology perspective and as a business edge over their competitors.

For me, I care more from a philosophical and moral perspective, and I'm curious how seriously our "AI Steering Committee" is taking the actions of these companies into account. Microsoft is one thing, as they're so embedded, but OpenAI? How long is a company that wants to be perceived as "good" going to continue using ChatGPT?

I don't have answers. Genuinely curious.

[-] GarlicToast@programming.dev 3 points 10 months ago

IMO, we may get another AI winter if things blow up legally.

[-] whoisearth@lemmy.ca 4 points 10 months ago

If we continue to run into issues with AI and copyright laws, maybe copyright laws are the issue. Maybe our broken system is holding us back.

[-] GarlicToast@programming.dev 3 points 10 months ago

I'm sure that Wine developers would be thrilled to be allowed to use leaked Windows code. I have a funny feeling that Microsoft might object.

[-] whoisearth@lemmy.ca 2 points 10 months ago

Those with the power want to keep the power. The pattern is consistent, be it Microsoft, Paramount, or John Grisham.

Now the question is: how do we abolish antiquated copyright laws while also ensuring people are adequately compensated for what they create?

Off the top, Microsoft and Paramount don't create. They're not people. They shouldn't be in the conversation and they have no rights (yes, I know that's not the reality, but I believe this). John Grisham has a leg to stand on.

I don't know what the solution is. I merely know that the current solutions don't work, but we continue to use them because those in power benefit from them.

[-] Blue_Morpho@lemmy.world 4 points 10 months ago

On the one hand it should be a copyright violation, but if it is, then Google Search and every other search engine are violating too.

The only reason you can search for an article and get a hit is that Google already read the page and copied it all to its internal servers, where everything is indexed. So when you search, Google can look up the keywords and give you a link.

If there were a bug in Google's search engine like the one in OpenAI's, you could craft a query that would leak Google's indexed data.

So all search engines are the same kind of copyright violators as OpenAI: they take data from everyone and profit from it (even if the profit is indirect, like ad revenue paying salaries).
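To make that concrete, here's a toy Python sketch (all names made up) of the kind of inverted index a crawler builds. Note that the engine necessarily keeps a full copy of every page it indexes:

```python
from collections import defaultdict

class TinyIndex:
    def __init__(self):
        self.pages = {}                # url -> full text (the stored copy)
        self.index = defaultdict(set)  # keyword -> urls containing it

    def crawl(self, url, text):
        self.pages[url] = text         # the whole article is retained
        for word in text.lower().split():
            self.index[word].add(url)

    def search(self, keyword):
        # normal operation: return links, not the copyrighted text
        return sorted(self.index.get(keyword.lower(), set()))

    def preview(self, url):
        # but the full text is sitting right there if a preview feature
        # (or a bug) ever exposes it
        return self.pages[url]

idx = TinyIndex()
idx.crawl("https://example.com/article", "full text of a copyrighted article")
print(idx.search("copyrighted"))  # ['https://example.com/article']
```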

[-] Black_Gulaman@lemmy.dbzer0.com 4 points 10 months ago

This is how I understand it too.

[-] yamanii@lemmy.world 2 points 10 months ago

Google Search doesn't summarize the article for me so that I never have a reason to visit the site, though.

[-] GarlicToast@programming.dev 2 points 10 months ago

Google directs me to the NYT, which makes revenue for both parties. OpenAI does not direct me to the NYT; they try to replace them. It's a parasitic relationship. If you hacked Google to pull the article from their cache, you would go to jail.

[-] Blue_Morpho@lemmy.world 1 points 10 months ago

If you hacked Google to pull the article from their cache, you would go to jail.

Google has a "preview" button which shows the article without clicking the link.

Is crafting a query to show an article "hacking"? Does that make the OpenAI researcher who got ChatGPT to show an article a hacker?

[-] GarlicToast@programming.dev 1 points 10 months ago

I think we see different Google interfaces; I have no preview button. It vanished years ago.

It appears that the NYT has an agreement with Google. They were a bad example.

[-] gedaliyah@lemmy.world 12 points 10 months ago

Digital, computer-aided plagiarism is new ground for copyright. Google has successfully defended Google Books with the argument that it searches an archive of legally purchased and licensed books for specific information without reproducing the entire work. It's the equivalent of visiting a physical library or bookstore and flipping through a book without actually purchasing it.

AI is something else entirely. It's more like a program that ingests ALL of the text (training data) and alters it according to an algorithm. This has been a problem with news-crawling websites for a long time: they would download copyrighted text, splice multiple sources together, or use an algorithm to replace common words, etc., then post it on their own ad-filled (and often virus-ridden) sites. It seems like AI is just a more sophisticated version of that. In any case, I'm not a lawyer, so who knows what the argument will be on one side or the other.
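For illustration, the word-replacement trick those sites used is about this sophisticated (a toy sketch with an invented synonym table, not any real site's code):

```python
# Toy "article spinner": swap common words for synonyms so copied text
# dodges naive duplicate detection. The synonym table is made up.
SYNONYMS = {"said": "stated", "big": "large", "show": "demonstrate"}

def spin(text):
    # naive word-for-word substitution; it even loses capitalization,
    # which is about the level of care those sites put in
    return " ".join(SYNONYMS.get(word.lower(), word) for word in text.split())

print(spin("Officials said the big study would show results"))
# -> Officials stated the large study would demonstrate results
```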

[-] echindod@programming.dev 2 points 10 months ago

I'm glad you brought up Google Books here. The lawsuits about this issue in the early 2010s are really important. But two things bother me: Google won the case, but then basically abandoned the project. It's still there, but a shell of what it used to be. I wonder if, even though they won, they really lost. Or it could be Google just abandoning another project because they never cared about it.

I think AI for searching books like Google Books would be an amazing use case, and really, it isn't that much different from what Google Books is: an index of all of the published words. In fact, I can imagine AI being able to help you figure out whether a book has the info you actually need. That's not what GPT is, but one could make one that could do it.

I am torn. I am sort of a GPT naysayer, but on the other hand, is it really all that philosophically different from what humans do? I don't think it is materially different, but it is a little.

[-] HootinNHollerin@slrpnk.net 11 points 10 months ago

FYI, The Times is different from The New York Times.

[-] mryessir@lemmy.sdf.org 7 points 10 months ago* (last edited 10 months ago)

I like this opinion: https://nyti.ms/3RGq2Iq#permid=130059838

Edit: I am referring to Ben's reply.

[-] kromem@lemmy.world 4 points 10 months ago

The thing is these are two separate arguments.

One is whether or not training is infringement.

The other is whether or not there needs to be stricter filters on output to avoid copyright.

The second one is easy to both argue for and implement; it just means funneling money towards a fine-tuned RAG model that detects infringement before spitting out content. I'd expect we'll be seeing that in the near future. It's similar to the argument that YouTube was doomed at its acquisition because of rampant copyright infringement; they just created a tagging system, and now people complain about over-zealous DMCA enforcement. Generative AI will end up in the same place, with the same results, for cloud-based models.
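As a rough illustration of what such an output filter could look like, here's a toy n-gram overlap check (not any vendor's actual pipeline; a production system would do retrieval over an indexed corpus rather than scan documents in memory):

```python
def ngrams(text, n=8):
    """Set of n-word shingles from the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(max(0, len(words) - n + 1))}

def looks_infringing(generated, protected_docs, n=8, threshold=0.05):
    """Flag generated text whose long n-grams overlap a protected corpus."""
    gen = ngrams(generated, n)
    if not gen:
        return False
    protected = set()
    for doc in protected_docs:
        protected |= ngrams(doc, n)
    return len(gen & protected) / len(gen) >= threshold

# The serving layer would then regenerate or refuse when the check trips:
# if looks_infringing(draft, nyt_corpus): draft = regenerate(prompt)
```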

The first is much more murky, and I'm extremely skeptical that the suits regarding it will be successful, given the degree of transformation and the relative scope of the material in any given suit compared to the total training set. As well, the purpose of copyright law in the first place was to encourage creation, and setting back arguably the most powerful creative tool in history (particularly when it means likely being eclipsed by other nation-states with different attitudes towards IP) doesn't seem all that encouraging.

If I were putting money on it, we'll see multiple rulings that training is not infringement, which will settle the issue, but we'll see "copyright detection as a service" pretty much everywhere for a short period, until suddenly the use of generative AI by creatives is so widespread that its output being uncopyrightable means business models shift from media as a product to media as a service.

[-] assassin_aragorn@lemmy.world 2 points 10 months ago

There is clearly value in a trained AI that an untrained model lacks; otherwise you could sell the two as a product or service for the same price. That training has value, and the price difference between a trained and an untrained model is that value.

Because training has value, the training material has value as well. You can't commercially extract value from someone else's product to make your own product and sell it, unless you buy their product wholesale or license it.

And if they argue that paying would be financially prohibitive to training, they admit that the training material has financial value. It'd be cheap if the training material weren't valuable.

I see two likely paths here for the future, presuming the court rules in favor of the NYT. The first is that AI companies work out a deal with publishers and media companies to use their work without breaking the bank. The second is that AI companies don't change the training process, but they change their financial model: if the AI is free to the public, they aren't making money off of anyone's work. They'd have to charge for ads or something.

[-] kromem@lemmy.world 6 points 10 months ago

Spaceballs extracts almost all of its value from Star Wars without paying for it.

You absolutely can extract value from things when the way in which you do it is fair use.

Which is typically considered to be use that is transformative enough so as to not simply be derivative, or in the public interest.

And I think you'd have a very difficult time showing LLMs general use to be derivative of any specific part of the training data.

We'll see soon, as these court cases resolve.

And if the cases find in favor of the plaintiffs, "not charging" isn't going to work out. You can't copy material, not charge for it, and get away with it. If there's precedent that training is infringement, it's unlikely the decision will be worded so narrowly that similar cases against companies that don't charge are found not to be infringement.

Keep in mind one of the pending cases is against Meta, whose model is completely free to access and use.

[-] assassin_aragorn@lemmy.world 3 points 10 months ago

Just want to say this is great food for thought. It's going to take me time to mull it over.

[-] mryessir@lemmy.sdf.org 2 points 10 months ago

I agree. Both your comments were exciting views to read. Thanks!

[-] kibiz0r@lemmy.world 1 points 10 months ago

In a perfect world, yes, I think AIs can and should be trained on real world content, but if those AIs still don’t understand the nuances of attribution, paraphrasing, and plagiarism, then that’s still a problem that needs to be addressed.

What a joke. Oh okay, so if the LLM's output can annotate where the snippets came from, then it's totally cool?

The fuck are we doing? We're really sleepwalking into a future where a few companies are able to slurp up the entire history of human creative thought, crunch some statistics about it with the help of severely underpaid Kenyans, and put a paywall around it, and that's totally legal.

Every time I see an "AI" (these are not fucking AI, and yet we're fucking doomed already) apologist, I always think of Peter Gibbons explaining the "fractions of a penny" scheme. https://www.youtube.com/watch?v=yZjCQ3T5yXo

"It becomes ours"

Are we really this dumb? Maybe we deserve the dystopia we're building.

[-] brbposting@sh.itjust.works 1 points 10 months ago

I get it. It can seem alarming, and I won't argue here about training on copyrighted works.

a few companies are able to slurp up the entire history of human creative thought, crunch some statistics about it with the help of severely underpaid Kenyans, and put a paywall around it, and that's totally legal.

If a few companies can slurp up our entire public domain history and profitably paywall useful products of it, have there still been moral failings?

[-] Blue_Morpho@lemmy.world 1 points 10 months ago

We’re really sleepwalking into a future where a few companies are able to slurp up the entire history of human creative thought, crunch some statistics about it with the help of severely underpaid Kenyans, and put a paywall around it, and that’s totally legal.

That future already happened ten years ago when NYT lost its lawsuits against Google.

[-] MilitantAtheist@lemmy.world 7 points 10 months ago

Everyone's going nuts over AI being trained on copyrighted works. No one cares that Spotify launched with warez-released MP3s.

[-] neurogenesis@lemmy.dbzer0.com 5 points 10 months ago

Good luck bro, next go after Facebook and Stability.AI and Mixtral and.... Uhh
