OpenAI Being Sued for "Stealing" People's Content Online
(www.firstpost.com)
Couple things:
While I appreciate this gentleman's copyright experience, I do have a couple of comments:
His analysis seems to come primarily from a legal perspective. While I don't doubt there is legal precedent for protection under copyright law, my personal opinion is that copyright is a capitalist conception that depends on an economic reality I fundamentally disagree with. Copyright is meant to protect the livelihoods of artists, but I don't think anyone's livelihood should depend on having to sell their labor. More often, copyright is used to protect the financial interests of large businesses, not individual artists. The current litigation is between large media companies and OAI, and any settlement isn't likely to remunerate individual artists more than a couple of dollars, and we can't turn back the clock to before AI could displace artists' jobs, either.
I'm not a lawyer, but his legal argument is a little iffy to me... Unless I misunderstood something, he rests his case on a distinction between human inspiration (i.e. creative inspiration producing derivative works) and how AI functions in practice (i.e. AI has no subjective "experience," so it cannot bring its own "hand" to a derivative work). I don't see this as a concrete argument, but even if I did, it's still no different from individual artists creating derivative works and crossing the line into copyright infringement. I don't see how this argument can be applied to the use of AI as a blanket rule, rather than to individual cases of someone using AI on a project that draws too heavily from an existing work.
The line is even less clear when discussing LLMs as opposed to T2I or I2I models, and I believe LLMs are what's at issue in the lawsuit against OAI. Unlike images from DeviantArt and Instagram, text datasets from sources like Reddit, Wikipedia, and Twitter aren't protected under copyright the way visual media are. The legal argument against using training data drawn from public sources is even less clear, and is even further removed from protecting individual users; it's instead a question of protecting social media sites whose legal claim is questionable to begin with. This is the point I'd expect this particular community to take issue with: I don't think Reddit or Twitter should be able to claim ownership over their users' content, nor do I think anyone should be able to revoke consent to fair use just because it threatens our status quo capitalist system.
AI isn't going away anytime soon, and litigating over the ownership of training data will only solidify the hold a handful of large tech giants have over our economy. I would rather see large AI models nationalized, or otherwise protected from monopolization.
I don't really have the time to look for timestamps, but he does present his arguments from many different angles. I highly recommend watching the whole thing if you can.
Aside from that, the main thing I want to address is the responsibility of these big corporations to curate the massive library of content they gather. It's entirely within their power to blacklist things like PII, sensitive information, or hate speech, but they decided not to because it was cheaper. They gambled that people either wouldn't care, wouldn't have the resources to fight it, or would actively support their theft if it meant getting a new toy to play with.
Now that there's a chance they could lose a massive amount of money, this could deter other AI companies from flagrantly breaking the law and set a better standard that protects people's personal data. Tbh I don't really think this specific case has much ground to stand on, but it's a first step toward securing more safety for people online. Imagine if the database for this AI were leaked. Imagine all of the personal data, yours and mine included, that would be available to malicious people. Imagine the damage that could cause.
They do curate the data somewhat, though it's hard to verify how thoroughly since they don't share their dataset (likely because they anticipate legal challenges).
There's no evidence they have "personal data" beyond text scraped directly from platforms such as Reddit (much of which is detached from other metadata). I care FAR more about the data Google, Facebook, or Microsoft hold leaking than I do about text written on my old Reddit or Twitter account, and yet somehow we're not wringing our hands about that data collection.
I watched most of that video, and I'm frankly not moved by much of it. The video seems primarily (if not entirely) written in response to generative image models and image data that may actually be protected under existing copyright, unlike the textual data at issue in this particular lawsuit. Even so, I think his hand-waving about "derivative work" is flimsy at best, and relies on a materialist perspective I just can't identify with (a pragmatic framework might be more persuasive to me). A case-by-case treatment of copyright infringement in the use of AI tools is the most solid argument he makes, but I am just not persuaded that all AI is theft because publicly accessible data was used for training. And I just don't think copyright law is an ideal solution to a growing problem of technological automation, ever-increasing productivity, and stagnating demand.
I'm open to being wrong, but I think copyright law doesn't address the long-term problems introduced by AI and is instead a shortcut to maintaining a status quo destined for failure regardless.