Artificial intelligence (AI) is currently used for many purposes that benefit humanity, in education, science, medicine, technology and even the entertainment industry. However, some people have turned to AI-powered platforms for more harmful ends.
As many of you know, there are artificial intelligence programs that generate voices from text. In other words, we can write a sentence and the AI will convert it into audio, for example with the voice of a celebrity. This capability lends itself to misuse, such as deepfakes.
A deepfake is a generated video, image, or audio clip that imitates the appearance and sound of a real person. Also called “synthetic media,” deepfakes are so convincing at mimicking the real thing that they can fool both people and algorithms.
This practice can be turned to harmful content, such as violent messages. Some users of 4chan, an anonymous imageboard, have been making deepfakes with famous voices such as Joe Rogan, Ben Shapiro and Emma Watson, producing audio clips that can harm other people.
The ElevenLabs complaint
According to a report published on the Computer Today website, ElevenLabs, a company founded by former Google and Palantir employees that specializes in AI-generated voices, denounced the use of its beta tool for these deepfakes.
“Crazy weekend. Thank you all for testing our Beta platform. Although we see our technology being overwhelmingly applied to positive use, we also see a growing number of cases of misuse of voice cloning,” the company said in a tweet.
Crazy weekend – thank you to everyone for trying out our Beta platform. While we see our tech being overwhelmingly applied to positive use, we also see an increasing number of voice cloning misuse cases. We want to reach out to Twitter community for thoughts and feedback!
— ElevenLabs (@elevenlabsio) January 30, 2023
This situation affects not only ElevenLabs and the users of sites like 4chan, but also the reputation of celebrities, who could end up suing these kinds of platforms.
For this reason, the company suggested implementing additional security measures: “While we may trace any generated audio back to the user, we would like to address this by implementing additional security measures.”
These measures would include verifying payment or identity information and verifying copyright over the voices, among other actions.
Our current ideas:
(1) Additional account verifications to enable Voice Cloning: such as payment info or even full ID verification
(2) Verifying copyright to the voice by submitting sample with prompted text
(3) Drop Voice Lab altogether and manually verify each cloning request
— ElevenLabs (@elevenlabsio) January 30, 2023