The recent saga surrounding OpenAI’s release of an AI voice strikingly similar to Scarlett Johansson’s has sparked controversy and raised concerns about the company’s practices.
Johansson expressed shock and anger upon hearing the released demo, which closely resembled her voice despite her prior refusal to be involved. OpenAI’s CEO, Sam Altman, added fuel to the fire by appearing to reference Johansson’s film Her in a post following the launch event.
The company has since pulled the voice, named Sky, and issued a statement saying it was never intended to resemble Johansson’s. The incident, however, has highlighted a broader issue: OpenAI’s use of intellectual property without explicit permission.
Critics have accused OpenAI of training its AI models on content from authors, publishers, and artists without proper consent. While some, like Johansson, were asked for permission and declined, others may not have been consulted at all.
The company is facing lawsuits from authors who allege their books were used without permission, and from The New York Times, which has challenged the use of its content to train OpenAI’s models.
Furthermore, Sony Music has raised concerns about the unauthorized use of its artists’ songs to train AI. OpenAI’s “ask forgiveness, not permission” approach has drawn legal trouble and raised questions about the ethics of its practices.
As AI continues to evolve and be integrated into various industries, it is crucial for companies like OpenAI to prioritize transparency, consent, and proper licensing agreements to avoid further legal and ethical dilemmas.