top 17 comments
[-] drmoose@lemmy.world 19 points 2 weeks ago* (last edited 2 weeks ago)

This is the notorious lawsuit from a year ago:

a group of well-known writers that includes comedian Sarah Silverman and authors Jacqueline Woodson and Ta-Nehisi Coates

The judge ruled that AI training is fair use:

But the actual process of an AI system distilling from thousands of written works to be able to produce its own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative,” Alsup wrote.

This is the second judgement of this type this week.

[-] deathmetal27@lemmy.world 12 points 2 weeks ago* (last edited 2 weeks ago)

Alsup? Is this the same judge who presided over Oracle v. Google over the use of Java in Android? That guy really does his homework on the cases he presides over; he famously learned how to code to decide whether APIs are copyrightable.

As for the ruling, I'm not in favour of AI training on copyrighted material, but I can see where the judgement is coming from. I think it's a matter of what's really copyrightable: the actual text and images, or the abstract knowledge in the material. In other words, if you were to read a book and then write a summary of a section of it in your own words, or orally describe what you learned from the book to someone else, is that copyright infringement? Or if you watch a movie and then describe your favourite scenes to your friends?

Perhaps a case could be made that AI training on copyrighted materials is not the same as humans consuming the copyrighted material, and therefore it should have a different provision in copyright law. I'm no lawyer, but I'd assume that current copyright law works on the basis that humans do not generally have perfect recall of the copyrighted material they consume. Then again, a counter-argument could be that neither does the AI, due to its tendency to hallucinate sometimes. Still, it has superior recall compared to humans, and perhaps that could be grounds for amending copyright law to cover AI training?

[-] drmoose@lemmy.world 2 points 2 weeks ago

Your last paragraph would be the ideal solution in an ideal world, but I don't think anything like this could happen under the current political and economic structures.

First, it's super easy to hide all of this, and enforcement would be very difficult even domestically. Second, because we're in an AI race, no one would ever put themselves at such a disadvantage unless there's real damage, not just economic copyright juggling.

People need to come to terms with these facts so we can address the real problems, rather than blowing against the wind with all the whining we see on Lemmy. There are actual things we can do.

[-] faizalr@fedia.io 7 points 2 weeks ago
[-] FaceDeer@fedia.io 1 points 2 weeks ago

Any reason to say that other than that it didn't give the result you wanted?

[-] ocassionallyaduck@lemmy.world 6 points 2 weeks ago

Terrible judgement.

Turn the k value (top-k sampling) down on the model and it reproduces text near verbatim.
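For context, top-k sampling just truncates the model's next-token distribution to the k most likely candidates before sampling. A toy sketch (made-up probabilities, not any vendor's actual implementation) shows why k=1 collapses to deterministic greedy decoding:

```python
import random

def top_k_sample(token_probs, k, rng=random):
    """Sample a token from only the k highest-probability candidates.

    token_probs: dict mapping token -> probability.
    With k=1 this degenerates to greedy decoding: the single most
    likely token is always chosen, so the output is deterministic.
    """
    # Keep only the k most probable tokens.
    top = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    tokens, probs = zip(*top)
    # Renormalise over the truncated set and sample.
    total = sum(probs)
    return rng.choices(tokens, weights=[p / total for p in probs])[0]

# Toy next-token distribution after some prompt.
dist = {"Potter": 0.90, "Styles": 0.06, "Truman": 0.04}

print(top_k_sample(dist, k=1))  # always "Potter": greedy, fully repeatable
```

With a larger k the tail tokens stay in play, which is where output variety (and some of the weirdness) comes from.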

[-] drmoose@lemmy.world -1 points 2 weeks ago

Ah the Schrödinger's LLM - always hallucinating and also always accurate

[-] tabular@lemmy.world 3 points 2 weeks ago

"hallucination refers to the generation of plausible-sounding but factually incorrect or nonsensical information"

Is an output a hallucination when the training data involved in producing it included factually incorrect data? Suppose my input is "is the world flat" and then an LLM, allegedly accurately, generates a flat-earther's writings saying it is.

[-] ocassionallyaduck@lemmy.world 0 points 2 weeks ago

There is nothing intelligent about "AI" as we call it. It parrots based on probability. If you remove the randomness value from the model, it parrots the same thing every time based on its weights, and if the weights were trained on Harry Potter, it will consistently give you giant chunks of Harry Potter verbatim when prompted.

Most of the LLM services attempt to avoid this by adding arbitrary randomness values to churn the soup. But this is also inherently part of the cause of hallucinations, as the model cannot preserve a single correct response as always the right way to respond to a certain query.

LLMs are insanely "dumb", they're just lightspeed parrots. The fact that Meta and these other giant tech companies claim it's not theft because they sprinkle in some randomness is just obscuring the reality and the fact that their models are derivative of the work of organizations like the BBC and Wikipedia, while also dependent on the works of tens of thousands of authors to develop their corpus of language.

In short, there was an ethical way to train these models. But that would have been slower. And the court just basically gave them a pass on theft. Facebook would have been entirely in the clear had it not stored the books in a dataset, which in itself is insane.

I wish I knew when I was younger that stealing is wrong, unless you steal at scale. Then it's just clever business.

[-] drmoose@lemmy.world -1 points 2 weeks ago

Except that breaking copyright is not stealing and never was. Hard to believe that you'd ever see copyright advocates on FOSS and decentralized networks like Lemmy; it's like people had their minds hijacked because "big tech is bad".

[-] josefo@leminal.space 1 points 2 weeks ago

What name do you have for the activity of making money off someone else's work or data, without their consent or any compensation? If the tech were just tech, it wouldn't need non-consenting human input to work properly. These are just companies feeding on various types of data; if the justice system doesn't protect an author, what do you think will happen when these same models start feeding off user data instead? Tech is good, the ethics are not.

[-] drmoose@lemmy.world 1 points 2 weeks ago

How do you think you're making money with your work? Did your knowledge appear out of a vacuum? Ethically speaking, nothing is an "original creation of your own merit only"; everything we make is transformative by nature.

Either way, the debate is moot, as we'll never agree on what is transformative enough to be harmful to our society unless it's a direct 1:1 copy with the direct goal of displacing the original. But that's clearly not the case with LLMs.

[-] FaceDeer@fedia.io -1 points 2 weeks ago

The enemy is at the same time too strong and too weak.

[-] AmosBurton_ThatGuy@lemmy.ca 1 points 2 weeks ago

Grab em by the intellectual property! When you're a multi-billion dollar corporation, they just let you do it!

[-] PattyMcB@lemmy.world 0 points 2 weeks ago

It sounds like the precedent has been set

[-] pyre@lemmy.world 1 points 2 weeks ago

🏴‍☠️🦜

this post was submitted on 26 Jun 2025
71 points (97.3% liked)