OpenAI has unveiled Voice Engine, an AI tool that can clone anyone’s voice from just a 15-second audio sample. The San Francisco-based company acknowledges the risks such a tool poses, especially in an election year, when misuse of voice-cloning technology could fuel misinformation and deception.
In response to these concerns, OpenAI is holding back any wider release of Voice Engine until safeguards are in place. The company is gathering feedback from partners in government, media, entertainment, education, and civil society on how to prevent misuse of synthetic voice technology.
As AI-powered applications proliferate, concern over audio fakes has grown: voice-cloning tools are now cheap, widely accessible, and difficult to trace. OpenAI’s cautious, deliberate approach to releasing Voice Engine reflects its stated commitment to preventing misuse.
By engaging a diverse group of partners and stakeholders before deployment, OpenAI aims to build a more secure foundation for Voice Engine’s eventual release. Its emphasis on external feedback and built-in safeguards signals a responsible approach to the risks of voice cloning.