Scarlett Johansson recently made headlines after revealing that OpenAI had approached her to provide a voice for ChatGPT powered by its GPT-4o model. Johansson stated that, after much consideration, she declined the offer for personal reasons. She was therefore surprised when a voice strikingly similar to hers, named Sky, was featured in a demo of OpenAI's new model last week.
OpenAI faced criticism from users who found the voice too reminiscent of Johansson's distinctive tone. The company clarified that Sky belongs to a different professional actress and was never intended as an imitation of Johansson. In response to the backlash, OpenAI paused the Sky voice and said it would introduce additional voices to serve a wider range of user preferences.
Johansson expressed shock and disappointment at the situation, revealing that she had hired legal counsel to address the issue. She emphasized the need for transparency and for protecting personal likenesses and identities, particularly as deepfake technology makes voices easier to replicate without consent.
The incident highlights the ongoing legal and ethical questions raised when AI-generated voices closely resemble those of celebrities. As synthetic media becomes harder to distinguish from the real thing, clear guidelines and legislation will be essential to protect privacy and identity rights in the digital age.