665 points (97.7% liked) · submitted 26 Jan 2024 by L4s@lemmy.world to c/technology@lemmy.world

George Carlin Estate Files Lawsuit Against Group Behind AI-Generated Stand-Up Special: ‘A Casual Theft of a Great American Artist’s Work’::George Carlin's estate has filed a lawsuit against the creators behind an AI-generated comedy special featuring a recreation of the comedian's voice.

[-] cubism_pitta@lemmy.world 107 points 9 months ago

If it's wrong to use AI to put genitals in someone's mouth, it should probably be wrong to use AI to put words in their mouth as well.

[-] ClamDrinker@lemmy.world 6 points 9 months ago

I agree, and I get that it's a funny way to put it, but in this case they started the video with a massive disclaimer that they were not Carlin and that it was AI. So it's hard to argue they were putting words in his mouth. If anything, it sets a praiseworthy standard for disclosing when AI was involved, considering the hate mob that such a disclosure attracts.

[-] CleoTheWizard@lemmy.world 11 points 9 months ago

The internet doesn't care, though. If I make fake pictures using someone's likeness and add a disclaimer, people will just repost them without the disclaimer and they will still do damage. Whether or not we can or should stop them is another story.

[-] ClamDrinker@lemmy.world 8 points 9 months ago* (last edited 9 months ago)

Completely true. But we cannot reasonably push responsibility for the entire internet onto someone who did their due diligence.

Like, some people post CoD footage to YouTube because it looks cool, and someone else, either mistakenly or maliciously, takes that footage and recontextualizes it as combat footage from an active warzone to shock people. Then others repost it with fake explanatory text on top, furthering the misinformation cycle. Do we now blame the people sharing their CoD footage for what other people did with it? Misinformation and propaganda are something society must work together to combat.

If it really matters, people will be out there warning others that the pictures being posted are fake. In fact, that's what happened even before AI, whenever a tragedy occurred: people would post images claiming to show what happened, only for them to later be confirmed as coming from some other tragedy. Or how some video games get fake leaks because someone rebranded fan-made content as a leak.

Eventually it becomes common knowledge, or at least easy to prove that it's fake. Take this picture, for instance:

It's been well documented that the bottom image is fake, and as such anyone can now find out what was covered up. It's up to society to speak up when the damage is too great.
