Meet Microsoft’s new AI tool that can mimic your voice in three seconds

Microsoft recently released VALL-E, an Artificial Intelligence (AI) tool that can replicate a person’s voice from a three-second sample. The tool is claimed to replicate any voice, including the speaker’s emotions and tone.
According to a report in Windows Central, the AI tool was trained on 60,000 hours of English speech data and uses short clips of specific voices to generate content. The report also notes that while some of the generated recordings sound natural, others sound robotic or machine-made.
It is also reported that VALL-E may be able to produce more realistic output if supplied with a larger sample set.

Implications of VALL-E
While VALL-E has many positive use cases, such as in media production, it also poses dangers. For example, scammers could use VALL-E in spam calls to defraud unsuspecting users, and politicians or other people of social influence could be impersonated. It also poses a security threat wherever voice-based authentication is used.
In addition, VALL-E could leave voice artists working in movies and audiobooks unemployed. It is some relief that VALL-E is not generally available yet, and Microsoft has published an ethics statement on its use.
Concerns related to ChatGPT
Similar job-related concerns were raised shortly after OpenAI’s ChatGPT became an overnight sensation following its launch last year.

Recently, Chester Wisniewski, principal research scientist at Sophos, said that as ChatGPT continues to charm the online world, its security implications cannot be ignored.
“ChatGPT is an interesting experiment at the moment, but its widespread availability certainly introduces new challenges. I have been playing with it since its public availability in November of 2022, and it is easy enough to get it to create very convincing phishing lures and to respond interactively in ways that could aid romance scams and business email compromise attacks. OpenAI seems to be trying to limit high-risk abuses, but the cat is now out of the bag,” Wisniewski said.
“The greatest risk today is to the English-speaking population, but it will only be a matter of time before tools are available to generate believable text in most of the world’s spoken languages. We have reached a point where humans are unlikely to be able to recognize machine-generated prose in casual conversations with people they are not familiar with, which means we will need security filters to help keep people from falling prey,” the scientist said.
