submitted 9 months ago by 0x815@feddit.de to c/technology@beehaw.org

Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.

Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.

In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.

[-] megopie@beehaw.org 15 points 9 months ago

“Ai” as it is being marketed is less about new technical developments being utilized and more about a fait accompli.

They want mass adoption of the automated plagiarism machine learning programs by users and companies, hoping that by the time the people being plagiarized notice, it’s too late to rip it all out.

That and otherwise devalue and anonymize work done by people to reduce the bargaining power of workers.

[-] SnotFlickerman@lemmy.blahaj.zone 7 points 9 months ago* (last edited 9 months ago)

They also don't care if the open, free internet devolves into an illiterate AI-generated mess, because they need an illiterate populace that isn't educated enough to question it anyway. They'll still have access to quality sources of information, while ensuring the lowest common denominator will literally have garbage information fed to them. I mean, that was already true in the sense that clickbait news outsold serious investigative news, so garbage clickbait became the norm and serious journalism is hard to come by and costly.

They love increasing barriers between them and the rest of the populace, physically and mentally.

[-] sonori@beehaw.org 5 points 9 months ago

Silicon Valley’s core business model has for years been to break the law so blatantly and openly, while throwing money at the problem to scale, that by the time law enforcement catches up to you, you’re an “indispensable” part of the modern world. See Uber, whose own publicly published business model was for years to burn money scaling and ignore employment law until it could drive all competitors out of business and become an illegal monopoly, thus allowing it to raise prices to the point of profitability.

[-] Zaktor@sopuli.xyz 1 points 9 months ago

Fucking scooters lying all over the sidewalk.

[-] Drewelite@lemmynsfw.com 3 points 9 months ago

A.I. exists. It will continue to get better. If letting people use it becomes illegal, they'll just use it themselves and cut us out. A world where the general population has access to A.I. is the only one where we're not totally fucked. I'm not simping for Google or Facebook; I'd much prefer an open source, self-hostable version. The only way we can stay competitive is if these companies continue to develop these in the open for the consumer market.

General purpose artificial intelligence will exist. Full stop. Intelligence is the most valuable resource in the universe. You're not going to stop it from existing, you're just going to stop them from sharing it with you.

[-] megopie@beehaw.org 4 points 9 months ago* (last edited 9 months ago)

What they have is miles from artificial general intelligence; it is not AI in even a limited sense. It is AI in the same way a mob in a video game is AI.

Their claims to be approaching it are marketing fluff at best, and abject lies at worst.

[-] Drewelite@lemmynsfw.com 2 points 9 months ago

I think if we sit here and debate the nuances of what is or is not intelligence, we will look back on this conversation and laugh at how pedantic it was. Movies have taught us that A.I. is hyper-intelligent, conscious, has its own objectives, is self-aware, etc. But corporations don't care about that. In fact, to a corporation, I'm sure the most annoying thing about intelligence right now is that it comes packaged with its own free will.

People laugh at what is being called A.I. because it's confidently wrong and "just complicated auto-complete". But ask your coworkers some questions. I bet it won't be long before they're confidently wrong about something, and when they're right, it'll probably be them parroting something they learned. Most people's jobs are things like: organize these items on those shelves, mix these ingredients and put it in a cup, get all these numbers from this website and put them in a spreadsheet, write a press release summarizing these sources.

Corporations already have the A.I. they need. You gatekeeping intelligence is just your ego protecting you from the truth: you, or someone dear to you, are already replaceable.

I think we both know that A.I. is possible; I'm saying it's inevitable, and likely already at version 1. I'm sure any version of it would require access to training data, so the ruling here would translate. The only chance the general population has of keeping up with corporations in the ability to generate economic value is to keep the production of A.I. in the public space.

this post was submitted on 29 Jan 2024
87 points (100.0% liked)
