OpenAI CEO Sam Altman recently defended the safety of the company’s AI technology amid growing concerns about potential risks and a perceived lack of safeguards for AI systems like ChatGPT. Speaking to developers at a Microsoft event in Seattle, Altman said OpenAI has put significant work into ensuring the safety of its models, particularly the GPT-4 large language model.
Despite recent controversies, including the departure of the team responsible for mitigating long-term AI risks, Altman urged developers to take advantage of the current opportunities presented by OpenAI’s technology. He emphasized the importance of not delaying projects and embracing the capabilities of generative AI.
As a close partner of Microsoft, OpenAI plays a crucial role in providing the foundational AI technology on which developers build their tools. Altman acknowledged that while the GPT-4 model is not perfect, it is generally considered robust and safe enough for a wide range of applications.
However, questions about OpenAI’s commitment to safety have resurfaced following the dissolution of the superalignment group, which was dedicated to mitigating AI risks. The departure of the team’s co-leader, Jan Leike, raised concerns about the company’s focus on new products over safety measures.
In addition, OpenAI faced criticism from actress Scarlett Johansson after the release of a ChatGPT voice that closely resembled hers. Altman apologized to Johansson but maintained that the voice, known as Sky, was not based on hers.
Despite these challenges, OpenAI continues to push forward with its AI technology, emphasizing the importance of safety and robustness in AI applications. As the company navigates controversy and addresses concerns, developers are encouraged to leverage OpenAI’s tools while remaining vigilant about potential risks in AI deployment.