OpenAI has unveiled a new AI model, Voice Engine, that has sparked concern due to its ability to mimic voices with striking accuracy. The tool can generate speech in a person's own voice, even in languages they do not speak, raising serious ethical and security considerations.
While this technology is not yet available to the public, OpenAI is carefully evaluating how to proceed and is engaging with a wide range of stakeholders to gather feedback and address potential risks. The company acknowledges the serious implications of generating lifelike voices, particularly in an election year, and is working to ensure responsible deployment of the technology.
OpenAI is currently using the model to power features in its ChatGPT platform and text-to-speech API. The company has also been collaborating with select partners to test Voice Engine in applications such as children's educational materials, language translation, and voice recovery for people with speech impairments.
To mitigate the risks associated with synthetic voices, OpenAI has imposed strict policies on its partner organizations, requiring explicit consent from the individuals whose voices are replicated and clear disclosure to listeners that the audio is AI-generated. The company is urging policymakers and developers to take proactive measures to prevent misuse of the technology.
To address these concerns, OpenAI has proposed creating a "no-go voice list" to protect prominent voices from unauthorized replication. It has also suggested phasing out voice-based security authentication in sectors like banking and developing methods for detecting AI-generated voices.
While OpenAI has not confirmed whether the tool will be publicly released, it emphasizes the importance of addressing the ethical implications and ensuring responsible use of synthetic voices. The company is calling for a broader dialogue on how this technology should be deployed and how society can adapt to its capabilities.