Former employees of OpenAI and Google DeepMind have warned that AI poses serious risks, up to and including human extinction. The group of 13 signatories, which includes former employees of both companies, has published an open letter calling on AI companies to adopt principles that allow employees to voice concerns about AI risks without facing negative consequences.
The group stresses the need for safeguards that let researchers openly criticize new AI developments and that give the public and policymakers a role in shaping the future of AI innovation. It points to the lack of effective oversight of companies developing powerful AI technologies, including artificial general intelligence (AGI), which could surpass human intelligence.
The letter urges AI companies to commit to four principles: not stifling criticism, establishing confidential channels for employees to raise concerns, fostering a culture of open criticism, and not retaliating against employees who disclose risk-related information. In response, OpenAI CEO Sam Altman publicly clarified the company's stance on its separation agreements.
When that response left critics unsatisfied, OpenAI went further and eliminated the non-disparagement clause and equity clawback provisions from the agreements, a move the company presented as a demonstration of its commitment to transparency and accountability. The concerns raised by former employees have fueled broader discussion about responsible AI development and the oversight needed to manage the risks of advanced AI technologies.