this post was submitted on 05 Jul 2023
115 points (98.3% liked)
Asklemmy
Forums like this may die, but I believe chat platforms like Matrix, Discord, and Slack will come out on top.
Anything with voice chat. I think we're still a little way off from AI being able to simulate a spoken conversation in real time. The API delay with these AIs is what gives them away.
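As a rough sketch of the "delay gives them away" idea: a bot relaying text through an API tends to answer slowly but with unnaturally consistent timing, whereas human reply times are all over the place. The cutoff values here are made up for illustration, not measured figures:

```python
# Minimal sketch of the "response delay" heuristic described above.
# SUSPICIOUS_DELAY_SECONDS and the variance cutoff are hypothetical
# illustration values, not validated thresholds.

SUSPICIOUS_DELAY_SECONDS = 2.0

def looks_automated(reply_delays):
    """Flag a speaker whose replies arrive slowly AND with uniform timing.

    reply_delays: seconds between each prompt and its reply.
    """
    if len(reply_delays) < 3:
        return False  # not enough evidence either way
    mean = sum(reply_delays) / len(reply_delays)
    variance = sum((d - mean) ** 2 for d in reply_delays) / len(reply_delays)
    # Slow on average, and suspiciously consistent from turn to turn.
    return mean > SUSPICIOUS_DELAY_SECONDS and variance < 0.1
```

A human chatting naturally (`[0.5, 3.0, 1.2, 0.2]`) varies too much to trip the check, while metronomic two-and-a-half-second replies would.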
Once you have talked with someone, you know they are real. And if you really wanted to confirm that people in your community are real, you could vet them over voice chat.
There are already successfully convincing phone scams with AI. https://www.npr.org/2023/03/22/1165448073/voice-clones-ai-scams-ftc
This will likely get significantly easier, cheaper, and faster in the very near future. Voice generation is relatively easy. We're going to need a whole new class of captchas and shibboleths to use online, but honestly, it's such a fast-moving target that I think cutting-edge AI will forever be a step ahead. I think the best we can hope for is to have viable countermeasures for commoditized AI techniques. For now that might include logic problems (which ChatGPT and its current competitors are quite bad at) but I'm sure the big players already have more advanced language bots in development.
I reallllly hate the idea of online IDs but it might be the only way.
Convincing someone for a scam is one thing; convincing someone you're having an actual thought-out conversation, with inflections, emotions, and logic all making sense, is another.
If we get to that point the system as we know it will be over anyways.
I remember some years back there was a news story about a chatbot passing the Turing test. The researchers had made their chatbot impersonate a young Russian boy, which made it harder for the native-English-speaking test subjects to recognize its limitations as non-human. So it wasn't actually that impressive.
That will likely be the first kind of thing we'll see for an artificial voice-chatbot as well. It's a big world and many of the people I talk with on Discord (and even IRL) are not native English speakers and not from my country.
I'm not intimately familiar with the accents and speech patterns from everywhere in the world, so I'm conditioned to shrug off a lot of "strange" language. Because of this wide range of human speech patterns, I'm not confident that I could validate voices with a low enough false-positive and false-negative rate in practice.
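To make the false-positive/false-negative framing concrete: in a vetting scheme, a false positive is a human wrongly flagged as a bot and a false negative is a bot that slips through. The counts below are a hypothetical vetting run, purely for illustration:

```python
def error_rates(true_positives, false_positives, true_negatives, false_negatives):
    """Compute false-positive and false-negative rates from raw counts.

    'Positive' means 'flagged as a bot': a false positive is a human
    wrongly flagged, a false negative is a bot that slipped through.
    """
    fpr = false_positives / (false_positives + true_negatives)
    fnr = false_negatives / (false_negatives + true_positives)
    return fpr, fnr

# Hypothetical run: 90 bots caught, 10 missed,
# 950 humans cleared, 50 humans wrongly flagged.
fpr, fnr = error_rates(90, 50, 950, 10)
# fpr = 50/1000 = 0.05, fnr = 10/100 = 0.10
```

Even a 5% false-positive rate means wrongly accusing one human in twenty, which is the practical worry with voice-based vetting of a community.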
I haven't really dug into the latest voice generation AI yet so I'm not sure how capable off-the-shelf programs are. I am familiar with the general techniques, though, and I think adding realistic inflection is within reach. I don't think it's possible to automate the entire pipeline yet, at least not with publicly available programs, but the field is advancing quickly so I can't take much solace in that.