Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
(www.404media.co)
This argument always seemed silly to me. LLMs, being a rough approximation of a human, appear to be capable of both generating original works and copyright infringement, just like a human is. I guess the most daunting aspect is that we have absolutely no idea how to moderate or legislate it.
This isn’t even a particularly surprising result. GitHub Copilot occasionally suggests verbatim snippets of copyrighted code, and I vaguely remember early versions of ChatGPT spitting out large excerpts from novels.
Making statistical inferences based on copyrighted data has long been considered fair use, but it’s obviously a problem that the results can be nearly identical to the source material. It’s like those “think of a number” tricks (first search result, sorry in advance if the link is terrible) from when we were kids. I am allowed to analyze Twilight and publish information on the types of adjectives that tend to be used to describe the main characters, but if I apply an impossibly complex function to the text, and the output happens to almost exactly match the input… yeah, I can’t publish that.
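To make that distinction concrete, here’s a rough sketch of the difference between publishing statistics about a text and reproducing it near-verbatim: check how many of the output’s word n-grams also appear in the source. This is just an illustration, not anything from the article or the actual extraction attack; the function names, the threshold choice of n, and the example strings are all made up.

```python
# Hypothetical sketch: measure how much of a model's output is lifted
# near-verbatim from a source text, using word n-gram containment.
# The example strings are placeholders, not real training data.

def ngrams(words, n):
    """Return the set of successive n-word tuples from a list of words."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(source_text: str, model_output: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that also appear in the source."""
    src = ngrams(source_text.lower().split(), n)
    out = ngrams(model_output.lower().split(), n)
    if not out:
        return 0.0
    return len(out & src) / len(out)

if __name__ == "__main__":
    source_text = "the quick brown fox jumps over the lazy dog and runs far away into the hills"
    model_output = "the quick brown fox jumps over the lazy dog and runs far away"
    # A high ratio suggests near-verbatim reproduction of the source;
    # a low ratio looks more like publishing adjective statistics about it.
    print(f"n-gram overlap: {overlap_ratio(source_text, model_output, n=5):.2f}")
```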
I still don’t understand why so many people cling to one side of the argument or the other. We’re clearly gonna have to reconcile AI with copyright law at some point, and polarized takes on the issue are only making everyone angrier.