George R.R. Martin and other authors sue OpenAI for copyright infringement
(www.theverge.com)
I've expressed my opinion on this before and it wasn't popular, but I think this case is going to get thrown out. Authors Guild, Inc. v. Google, Inc. established the precedent that digitization of copyrighted work is considered fair use, and fine-tuning an LLM even more so, because LLMs can ultimately be thought of as storing text data through a very, very lossy compression algorithm, and you can't exactly copyright JPEG noise.
And I don't think many of the writers or studio people have actually tried to use ChatGPT for creative writing, so they assume it magically outputs perfect scripts just by being told to write a script. The reality is that if you give it a simple prompt, it generates the blandest, most uninspired, badly paced textual garbage imaginable (LLMs are also really terrible at jokes), and you have to spend so much time prompt engineering just to get it to write something passable that it's often easier to just write it yourself.
So I think the current law on this is fine: purely AI-generated content is uncopyrightable, and NOBODY can make money off it without significant human contribution.
Which is not too far from the typical sequel quality coming out of Hollywood at the moment ;-)
Well, nobody really wants to ever put their name on something they're not proud of, right?
But when the goal is to churn out as much "content" as fast as possible to fill out streaming services, on impossible deadlines and under threat of unemployment for writers, of course writing quality will suffer.
Unhappy, overworked and underpaid people will understandably deliver poor work, which is why the strike is necessary for things to change.
This will definitely change though. As LLMs get better and develop new emergent properties, the gap between a human-written story and an AI-generated one will inevitably diminish.
Of course you will still need to provide detailed and concrete input so that the model can give you the most accurate result.
I feel like many people subscribe to a sort of human superiority complex that is unjustified and will quite certainly get stomped in the coming decades.
That is definitely not inevitable. It could very well be that we reach a point of diminishing returns soon. I'm not convinced that the simplistic construction of current-generation machine learning can go much further than it already has without significant changes in strategy.
Could be, but the chances of that happening are next to zero and it'd be foolish to assume this is the peak.