this post was submitted on 16 Feb 2024
184 points (96.0% liked)
Privacy
It would be worth finding out more about how exactly the training process works, namely whether or not the AI company stores the training audio clips after training has been completed. If they don't, then I would say you don't have anything to worry about, because the model itself can't be used to clone your voice to any useful extent. Deep neural networks aren't reversible like that. And even if they were, the model isn't trained just on you; it's trained on hundreds of thousands of people and then fine-tuned to you.
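To make the "pretrained on many, fine-tuned on you" point concrete, here is a toy numpy sketch (synthetic data, not a real speech model; every name here is illustrative). The only artifact that survives training is a small weight matrix; the clips themselves are gone, and you can't read them back out of the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretraining": feature vectors standing in for many speakers' clips.
X_many = rng.normal(size=(1000, 16))
Y_many = X_many @ rng.normal(size=(16, 4))  # synthetic targets

W = np.zeros((16, 4))
for _ in range(200):  # plain gradient descent on mean squared error
    grad = X_many.T @ (X_many @ W - Y_many) / len(X_many)
    W -= 0.1 * grad

# "Fine-tuning": a handful of clips from one new speaker nudges the
# same shared weights; it does not store that speaker's audio.
X_you = rng.normal(size=(20, 16))
Y_you = X_you @ rng.normal(size=(16, 4))
for _ in range(50):
    grad = X_you.T @ (X_you @ W - Y_you) / len(X_you)
    W -= 0.05 * grad

# All that persists is W, a 16x4 array of floats.
print(W.shape)  # prints (16, 4)
```

A real system has billions of weights instead of 64, but the asymmetry is the same: the training audio shapes the numbers, yet the numbers are a lossy blend of everyone's data, not a recording of anyone's.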
If they do store the clips, though, then maybe show them this article about GitHub to prove to them that there is precedent for private companies using people's data to train AI without their explicit consent.
To expound on this, AI models are extremely narrow in scope. One that reproduces the audio it was trained on is entirely different from one that understands what is being said. As Mr. Turkalino mentioned, transcription AIs are built from a combination of speech recognition and highly specialized text data narrowly defined by your industry (medical, in this case); in fact, the vendor may have tuned separate models for separate disciplines. That tuning would involve thousands of documents, from textbooks to scholarly journals, along with thousands of recordings of professionals saying the words in a variety of accents and dialects, so the model can tell apart very important and very different-sounding words. My wife is pregnant, so amnionitis and amniocentesis come to mind: they sound close enough that a general model might confuse them, and a wrong transcription could spell real problems for whoever looks at the patient's chart if there are complications.
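A crude way to see why a domain lexicon helps with confusable terms: even simple string similarity can snap a misheard word to the nearest in-domain term. This is a toy sketch using Python's `difflib`, with a hypothetical word list; real transcription systems use far more sophisticated language-model biasing, not this.

```python
import difflib

# Hypothetical in-domain vocabulary a medical model would be tuned on.
MEDICAL_LEXICON = ["amnionitis", "amniocentesis", "preeclampsia", "oxytocin"]

def snap_to_lexicon(word, lexicon, cutoff=0.8):
    """Return the closest in-domain term, or the word unchanged if nothing is close."""
    matches = difflib.get_close_matches(word, lexicon, n=1, cutoff=cutoff)
    return matches[0] if matches else word

print(snap_to_lexicon("amnioitis", MEDICAL_LEXICON))  # prints amnionitis
print(snap_to_lexicon("hello", MEDICAL_LEXICON))      # prints hello (no close match)
```

The point of the sketch is the cutoff: with a narrow, domain-specific vocabulary, "close enough" is meaningful, whereas against a general dictionary the same fuzzy match would drown in false candidates.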
Also, most models are run in the cloud because the calculations can be very taxing. I run Stable Diffusion and other AIs locally on my beast of a machine and it struggles at times; realistically, the cloud machines are just bigger than anything you can get as a desktop.

Under the most ideal circumstances, the audio of your notes does not live on the servers: it is transmitted, stored on a virtual machine (VM) while it is being processed, and once the results are returned the VM is destroyed and the audio recording goes with it. Nothing is kept. Of course, that is where you need to do the work of making sure your situation actually is "ideal". One of the biggest controversies with AI right now is that data is being stored for reinforcement training of the models. For example: you send your recordings and the AI returns the transcript; you mark any corrections and go on with your day; the company then feeds those recordings back into the general model along with your corrections to teach it what it got wrong. You will want to be sure that you are allowed to opt out of your data being used as training data (beyond the fine-tuning that helps it learn your voice).
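The "ideal" ephemeral flow described above can be sketched in a few lines: the audio only exists inside a scratch area for the lifetime of the job, then is destroyed along with it. This is an illustration of the pattern, not any vendor's actual pipeline; `fake_transcribe` is a stand-in for the real model call.

```python
import os
import tempfile

def fake_transcribe(path):
    # Stand-in for the real speech-to-text model.
    with open(path, "rb") as f:
        return f"{len(f.read())} bytes transcribed"

def process_ephemerally(audio_bytes):
    # The temporary directory plays the role of the short-lived VM.
    with tempfile.TemporaryDirectory() as scratch:
        clip = os.path.join(scratch, "notes.wav")
        with open(clip, "wb") as f:
            f.write(audio_bytes)
        transcript = fake_transcribe(clip)
        leftover = clip
    # Leaving the block tears the scratch area down, audio and all;
    # only the transcript survives.
    assert not os.path.exists(leftover)
    return transcript

print(process_ephemerally(b"\x00" * 1024))  # prints 1024 bytes transcribed
```

The opt-out question is exactly whether the vendor deviates from this pattern, i.e. whether a copy of the audio is siphoned off for retraining before the scratch area is destroyed.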