[-] flossdaily@lemmy.world 57 points 1 year ago

I cloned my own voice to prank a friend, and... Wow, it was a gut-dropping moment when I understood just how dangerous this tool is for precisely this type of scam.

It's one thing to hear about it, but to actually experience it... Terrifying.

[-] AnokLola@lemm.ee 1 points 1 year ago
[-] flossdaily@lemmy.world 4 points 1 year ago

Check out ElevenLabs.

[-] qooqie@lemmy.world 1 points 1 year ago

Mind sharing more info about the prank? Sounds like an interesting story

[-] flossdaily@lemmy.world 20 points 1 year ago

Oh, it was nothing more than showing off the technology, really. It wasn't a committed bit.

I cloned my voice, then left a voicemail that said something like: "Hey buddy, it's me. My car broke down and I'm at... Actually, I don't know where I'm at. I walked to the gas station and borrowed this guy's phone. He said he'll give me a ride into town if I can get him $50. Could you Venmo it to him at @franks_diner? I'll pay you back as soon as I can find my phone. ... By the way, this is really me, definitely not a bot pretending to be me."

[-] Imgonnatrythis@sh.itjust.works 30 points 1 year ago

Do you guys remember when the T-1000 did this?

[-] remotelove@lemmy.ca 23 points 1 year ago

What's wrong with Wolfie? I can hear him barking...

[-] bionicjoey@lemmy.ca 9 points 1 year ago

Your parents are dead

[-] Hamartiogonic@sopuli.xyz 4 points 1 year ago

In Terminator 1 the T-800 made a scam call to Sarah in order to find out where she is. He deepfaked the voice of Sarah’s mother, and she fell for it.

[-] Drusas@kbin.social 24 points 1 year ago

As someone who has an uncanny ability to recognize voices, I'm skeptical about how good these really are. Of course, most people don't share that ability.

Meanwhile, I could probably be fooled by a picture.

[-] PilferJynx@lemmy.world 16 points 1 year ago

Hmm, I understand your sentiment, but how would you know? Of course you'd pick out the bad dupes, but this technology is getting so good that I fear the convincing ones would go unnoticed, especially when the detectable ones are the only ones you ever catch, which just reinforces the bias.

I always thought being able to recognize voices was a common skill? Is it not?

[-] Drusas@kbin.social 1 points 1 year ago

Very much not, in my experience.

[-] Rozz@lemmy.sdf.org 5 points 1 year ago

Yeah, does a familiar voice mean a famous person or personal friend?

[-] Drusas@kbin.social 6 points 1 year ago

For me, it could be either. Some of us recognize people by their voices more than by their faces.

[-] gregoryw3@lemmy.ml 1 points 1 year ago

I don't have the examples at hand, but I've listened to samples of various AI-generated clones (one paper had samples trained on roughly 10 s, 30 s, 1 min, and 5 min of audio), and each one sounded progressively better. The 10-second one basically sounded like a voice call whose bitrate dropped out mid-word, but as long as the words used similar phonemes, the voice sounded pretty close. That's just my experience, though; it might sound pretty bad to you, while to me it sounded reasonable, like a real voice under bad audio conditions.

https://github.com/CorentinJ/Real-Time-Voice-Cloning

This is the main one I've seen examples of. You'll have to find the samples yourself; I believe they were in the actual paper.

[-] cypherpunks@lemmy.ml 0 points 1 year ago

That code was state of the art (for free software) when the author first published it with his master's thesis four years ago, but it hasn't improved a lot since then and I wouldn't recommend it today. See the Heads Up section of the readme. Coqui (a free software Mozilla spinoff) is better but also is sadly still nowhere near as convincing as the proprietary stuff.
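
For anyone curious what the free tooling looks like in practice, here is a minimal sketch of cloning a voice with Coqui's TTS package and its XTTS v2 model. This is an illustration, not an endorsement of any particular release: the package name, model name, and reference file are assumptions based on Coqui's own documentation and may have changed since.

```python
# Rough sketch using Coqui TTS (pip install TTS).
# XTTS v2 does zero-shot voice cloning from a short reference clip.
from TTS.api import TTS

# Load the multilingual voice-cloning model (weights download on first run)
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Speak the given text in the voice heard in reference.wav
tts.tts_to_file(
    text="Hey, it's me. My car broke down and I could use a hand.",
    speaker_wav="reference.wav",   # a few seconds of the target speaker
    language="en",
    file_path="cloned.wav",
)
```

Even with that level of convenience, the output still tends to sound flatter than the proprietary services discussed elsewhere in this thread.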

[-] gregoryw3@lemmy.ml 3 points 1 year ago

Wait, it's been 4 years? Time really flies. Yeah, as with most AI things, I assumed those with more time and resources would create better models. Open-source AI is at a big disadvantage when it comes to dataset size and compute power.

[-] Heratiki@lemmy.ml 19 points 1 year ago

Good luck criminals. I ignore nearly every call.

[-] Catfish@lemmygrad.ml 3 points 1 year ago

Yeah, but they'll call your family. A friend of mine was recently affected by this: a scammer had a clone of her voice asking for around $300 to fix her car because she was supposedly stranded in the middle of nowhere. So they call up your parents, and to your mom it's like "Oh no! My baby! Of course I'll help you!" and she gives them $300 thinking it's you.

[-] Heratiki@lemmy.ml 2 points 1 year ago

Yeah, my family knows better. I don't call anyone either, plus I've got all of my family on DEFCON 1 when it comes to asking for money. Someone once tried to scam my mom via Facebook pretending to be my sister. Family members contact me ALL the time with issues with their stuff, so they don't trust anything at all.

This all stems from myself getting scammed nearly 20 years ago via email so I’ve educated everyone immensely.

[-] sramder@lemmy.world 11 points 1 year ago

Anyone know how many hours of training data it takes to build up a convincing model of someone's voice? It was tens of hours when I did a bit of research a year ago… the article says social media is the likely source of training data for these scams, but that seems unlikely if it still takes that much.

[-] treefrog@lemm.ee 14 points 1 year ago

I don't remember the exact number but I did see an article recently that said it was videos on social media like you surmised.

And it was a pretty minimal amount of data needed. Definitely not tens of hours. Less than one hour iirc.

[-] Rozz@lemmy.sdf.org 4 points 1 year ago

Is it safe to assume that if you don't have any family that posts videos to Facebook/socials you are in a safer place?

[-] NeoNachtwaechter@lemmy.world 3 points 1 year ago

if you don't have any family that posts videos to Facebook/socials you are in a safer place?

You are safe only if you don't have any people at all whom you trust.

But then you have some other problems...

[-] Sacreblew@lemmy.ca 2 points 1 year ago

Make sure to use a fake accent when talking to strangers on the phone

[-] treefrog@lemm.ee 1 points 1 year ago

I certainly am hoping so myself.

[-] sramder@lemmy.world 3 points 1 year ago

The technology has clearly come a long way in a short time, really fascinating.

I remember the first examples I read about were trained on celebrity-read audiobooks because they needed so much audio data. I want to say Tom Hanks or Anthony Hopkins, but I could be confusing that with something else.

[-] CrabLangEnjoyer@lemmy.world 10 points 1 year ago

A current state-of-the-art AI model from Microsoft can achieve acceptable quality with about 3 seconds of audio; commercially available stuff like ElevenLabs needs about 30 minutes. Quality will obviously vary heavily, but then again the scammers are working over a low-quality phone call, so maybe that doesn't matter much.

[-] sramder@lemmy.world 3 points 1 year ago

That’s downright scary :-) I think it took longer in the last Mission Impossible.

30 minutes is still pretty minimal for the kind of targeted attack it sounds like this is used for. I suppose we all need to work with our families on code words or something.

I went in thinking the article was a bit alarmist, but that's clearly not the case. Thanks for the insight.

[-] madsen@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

With that little, they may be able to recreate the timbre of someone's voice, but speech carries a multitude of other identifiers and idiosyncrasies that they're unlikely to get with that little audio, like personal vocabulary (we don't choose the same words and phrasings for things), specific pronunciations (e.g. "library" vs "libary"), voice inflections, etc. Obviously, the more training data you have, the better the output.

[-] waylaidwanderer@lemmy.ca 1 points 1 year ago

ElevenLabs only needs 1 minute, but it also works with even shorter clips.

[-] DontMakeMoreBabies@kbin.social 5 points 1 year ago* (last edited 1 year ago)

I literally just cloned someone's voice for a presentation on AI and did it using maybe 30 total minutes of audio....

Took me about an hour and it was free. Hardest part was clipping the audio to get the 'good bits.'

The voice was absolutely convincing.

[-] theodewere@kbin.social 5 points 1 year ago

it's no wonder actors are taking an interest, given the level of tech Disney and everybody else must have access to

[-] sramder@lemmy.world 2 points 1 year ago

Wow! That’s really impressive.

[-] Even_Adder@lemmy.dbzer0.com 3 points 1 year ago

TorToiSe can work off of just three ten-second clips when you're using a pretrained model. No telling whether that'll sound any good, though.
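
If you want to poke at it yourself, the tortoise-tts repo documents a Python API roughly like the sketch below. This is an approximation from the project's README: the "myvoice" folder name is a placeholder, and exact function signatures may differ between versions.

```python
# Sketch based on the tortoise-tts README (github.com/neonbjb/tortoise-tts).
# Assumes a folder tortoise/voices/myvoice/ with a few ~10-second WAV clips.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()
voice_samples, conditioning_latents = load_voice("myvoice")

# Generate speech in the cloned voice; "fast" trades quality for speed
gen = tts.tts_with_preset(
    "This is a test of a cloned voice.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset="fast",
)
torchaudio.save("generated.wav", gen.squeeze(0).cpu(), 24000)
```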

[-] sramder@lemmy.world 2 points 1 year ago

I’ll have to check that out, thanks for the link.

[-] Johanno@feddit.de 2 points 1 year ago

The most advanced model I know of just needs half an hour of your voice or so.

[-] sramder@lemmy.world 5 points 1 year ago

Someone else mentioned that Microsoft has one capable of working with far less material.

But 30 minutes is definitely short enough to make this sort of scam/attack feasible in my mind.

[-] just_another_person@lemmy.world 7 points 1 year ago

Whoever is stupid enough to think that Tom Hanks is calling them personally probably needs a court-appointed guardian.

[-] TheFriar@lemm.ee 52 points 1 year ago

Did you read the article? It's talking about taking kids' voices from TikTok and shit. Social media. People have been posting videos of themselves talking for years. That's enough data to train an AI to leave a message saying, "Mom, I lost my phone and I'm in trouble. I need some money," or something of that sort. It's been happening for a long time; this is only making it more convincing.

[-] bionicjoey@lemmy.ca 6 points 1 year ago

I'm so fucking glad that I've hardly ever had my voice and likeness posted publicly on the internet

[-] TheFriar@lemm.ee 5 points 1 year ago

Same. I managed to stay off of social media, and I was the prime age for it at every turn. MySpace came around when I was in middle school/early high school. Facebook was opened up to everyone in late high school. Instagram came around when I was in college—and when I was traveling. I’m so glad I was that super annoying kid calling everything a conspiracy to steal my likeness/steal my data…who knew my need to be a contrarian as an anarchist teen would be so helpful?

I mean…I also grew up into an anarchist adult. So I just got lucky that I found the right books and music to push me in that direction young.

[-] ChickenAndRice@sh.itjust.works 1 points 1 year ago
[-] TheFriar@lemm.ee 1 points 1 year ago

A lot of CrimethInc., Emma Goldman, and Adbusters in high school (Adbusters isn't a book, but it was still deep in my repertoire). From there, Hannah Arendt, Chomsky, etc. in late high school/college. I also listened to a lot of Anti-Flag, Against Me!, Propagandhi, Strike Anywhere… all of my media was very anarchist/anti-government/anti-capitalist. I stood no chance lol.

And as someone who was young enough to feel angry (justifiably so… Bush/Cheney and the Patriot Act were all happening, so I had plenty of reason to be wary of spying), I was following these things and knew what was happening, but I was still just a contrarian at heart. I could yell and argue with my parents' friends, but I probably sounded like an ass. I didn't fully know how to hold these beliefs; they were more knee-jerk reactions fueled by hormones and an insane set of circumstances in the world. A lot of the embarrassing memories that come to me randomly when I'm trying to fall asleep have to do with being up in arms about something I wasn't really qualified to speak on lol

I’m sure I was more annoying than I was inspiring

[-] PlantJam@lemmy.world 3 points 1 year ago

enough data

To be clear, about three seconds of your voice is "enough".

[-] just_another_person@lemmy.world 2 points 1 year ago

The entire article is talking about scammers using AI models of voices you know. None of these scam rings have the time to narrow things down to your specific family.

[-] TheFriar@lemm.ee 1 points 1 year ago* (last edited 1 year ago)

You sure? It's very easy for these scammers to make a bot that trawls those "address/people lookup" sites, gets family names and numbers, searches for those people's public social media, and compiles the footage. It wouldn't be much work at all after creating the bot; those creepy people-lookup sites list an absurd amount of information, which would make doing this very easy. And think of how much work already goes into scams that use sheer numbers to boost the likelihood of a basic ruse working. Even if they could only trim that list of available phone numbers down to 30%, or 15%, of numbers that now come with personal information and an in by imitating someone the target knows and loves, that's still a fuckload of people. The likelihood of success would shoot WAY up while actually cutting down on the amount of work they'd need to do. So I'd argue you have that backwards.

[-] MargotRobbie@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

Unless you actually know Tom Hanks personally and are expecting a call from him, of course.

[-] d4rknusw1ld@lemmy.world 1 points 1 year ago