[-] Gaywallet@beehaw.org 55 points 1 year ago

Not a strong case for NYT, but I've long believed that AI is vulnerable to copyright law, and that copyright is likely the only thing that will stop or slow its progression. Given the major issues with all AI, how inequitable and bigoted these systems are, and their increasing use, I'm hoping this helps start conversations about limiting the scope of AI or its applications.

[-] FlashMobOfOne@beehaw.org 40 points 1 year ago

It's pretty apparent that AI developers are training their applications using stolen images and data.

This was always going to end up in the courts.

[-] teawrecks@sopuli.xyz 19 points 1 year ago

A human brain is just the summation of all the content it's ever witnessed, though, both paid and unpaid. There's no such thing as artwork that is completely 100% original, everything is inspired by something else we're already familiar with. Otherwise viewers of the art would just interpret it as random noise. There has to be some amount of familiarity for a viewer to identify with it.

So if someone builds an atom-perfect artificial brain from scratch, sticks it in a body, and shows it around the world, should we expect the creator to pay licensing fees to the owners of everything it looks at?

[-] knotthatone@lemmy.one 3 points 1 year ago

> So if someone builds an atom-perfect artificial brain from scratch, sticks it in a body, and shows it around the world, should we expect the creator to pay licensing fees to the owners of everything it looks at?

That's unrelated to an LLM. An LLM is not a synthetic human brain. It's a computer program that applies statistical patterns derived from large amounts of training data to generate outputs from prompts.
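To make the "statistics, not a brain" point concrete, here's a deliberately toy sketch: a bigram model that just counts which word follows which in its training text and samples from those counts. This is nothing like ChatGPT's actual scale or architecture, just an illustration of the general idea of generating output from learned statistics.

```python
# Toy "language model": count word -> next-word frequencies, then sample.
# Purely illustrative; real LLMs use neural networks, not raw bigram counts.
import random
from collections import defaultdict, Counter

training_text = "the cat sat on the mat the dog sat on the rug"

# "Training": tally how often each word follows each other word.
counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    counts[current][nxt] += 1

def generate(prompt_word, length=6):
    """Extend a prompt by repeatedly sampling a statistically likely
    next word -- pattern-matching over training data, not thinking."""
    out = [prompt_word]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(next_word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat the"
```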

If we get real general-purpose AI some day in the future, then we'll need to answer those sorts of questions. But that's not what we have today.

[-] teawrecks@sopuli.xyz 5 points 1 year ago

The discussion is about law surrounding AI, not LLMs specifically. No, we don't have an AGI today (that we know of), but assuming we eventually will, the laws we write today will probably still be in force. So regardless of when it happens, we should be discussing and writing laws now under the assumption that it will eventually happen.

[-] knotthatone@lemmy.one 1 point 1 year ago

This thread is about ChatGPT, an LLM. It is not a general-purpose AI.

[-] teawrecks@sopuli.xyz 4 points 1 year ago

I'm saying the discussion is about AI law in general. You can't responsibly have a discussion about law around LLMs without considering how it would affect future, sufficiently advanced AI.
