this post was submitted on 09 Aug 2023
376 points (100.0% liked)
Technology
Two points:
- These AIs can't do that: they need thousands or millions of repetitions to "learn" the movie, and every time they "replay" it, the result differs from the original.
- "Learning by rote" is something fleshbags can do, and are in fact required to do by most education systems.
So either humans have been infringing copyright all this time, or the machines aren't infringing it either.
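To make the first point concrete, here is a toy sketch (my own illustration, not anything from the thread): a tiny gradient-descent model is shown the same data over many epochs, analogous to a network needing many passes to "learn" its training set, and its "replay" of that data comes out as an approximation rather than an exact copy.

```python
# Toy stand-in for "learning by repetition": fit y = w*x + b
# by showing the model the same points over and over (epochs).
def train(points, epochs=10000, lr=0.01):
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(epochs):
        # One full pass over the training data per epoch.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Memorize" three points that almost, but not exactly, lie on a line.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
w, b = train(data)

# The model's "replay" of its training data is close, but not identical:
replay = [w * x + b for x, _ in data]
print(replay)
```

Even after ten thousand repetitions, the reproduced values only approximate the originals; the model stores a compressed generalization, not a copy.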
You have one brain. You could have as many instances of AI as you can afford. In a general sense, it’s different, and acting like it’s not is going to hit you like a freight train if you don’t prepare for it.
That's a different goalpost. I get the difference between 8 billion brains, and 8 billion instances of the same AI. That has nothing to do with whether there is a difference in copyright infringement, though.
If you want another goalpost, that IMHO is more interesting: let's discuss the difference between 8 billion brains with up to 100 years life experience each, vs. just a million copies of an AI with the experience of all human knowledge each.
(That's still not really what's happening, which is tending more towards several billion copies of AIs with vast slices of human knowledge each).
It’s all theoretical at this stage, but like everything else that society waits until it’s too late for, I think it’s reasonable to be cautious and not just let AI go unregulated.
It's not reasonable to regulate stuff before it gets developed. Regulation means establishing limits and controls on something, and those can't be reasonably defined before that "something" even exists, much less tested to see whether the regulation actually has its intended effects.
For what it's worth, a "theoretical regulation" already exists: Asimov's Three Laws of Robotics. Turns out current AIs are not robots, and that regulation is nonsense when applied to Stable Diffusion or LLMs.
I disagree. Over the last twenty years or so, we have plenty of examples of things that should have been regulated from the start but weren't, and now it's very difficult to do so. Every "gig economy" business, for example.
Well, fleshbags have to pay several years' worth of salary to get their education, so by your comparison, Google's AI should too.
Imagine thinking Public Education doesn't count. Or that no one without a college degree ever invented anything useful. That's before we get to your notion of "College SHOULD be expensive, for everyone, always".
The problem with education is NOT that some people pay less for theirs, or nothing at all, nor that some even have the audacity to learn quickly. AI could help give everyone a chance to learn cheaply, even quickly.
You're just off on your own little rant now, arguing points I never even implied.
That's wrong on so many levels:
So just because fleshbags are really bad at learning doesn't mean Google's AI has to pay for the same shortcomings; it already pays for its own.