Tech giants and artificial intelligence (AI) research organization OpenAI have joined forces with the White House to address the risks associated with AI models. This collaboration aims to develop responsible AI by focusing on safety, security, and trust. The companies involved include Amazon, Meta Platforms, Microsoft, and Google, as well as AI startups Inflection and Anthropic.
The commitment from these firms involves a set of voluntary steps to prevent the misuse of advanced AI models. Before releasing a new AI model to the public, the companies will conduct rigorous testing with the help of independent experts, including red-teaming exercises that simulate malicious use to uncover flaws in a system. This testing will specifically probe for risks such as a model assisting in weapons development or hacking campaigns, societal harms such as bias and discrimination, and the possibility of a model making copies of itself.
To further enhance model safety, the companies have agreed to incentivize third-party risk research by introducing bug bounty programs and similar initiatives. This approach helps ensure that potential issues are surfaced by external researchers who can provide independent scrutiny.
Another critical aspect of this initiative is securing unreleased model weights, the numerical parameters that encode a trained neural network's capabilities. The companies will establish secure environments to store these weights, limit employee access, and implement insider threat detection programs. By safeguarding unreleased weights, leading AI developers can keep their most capable systems out of the hands of attackers.
Transparency is also a key principle of the AI safety commitments. The participating companies will release reports describing their AI models’ capabilities and limitations, and they will introduce mechanisms such as watermarking or provenance tracking to help users identify whether audiovisual content was generated by AI.
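None of the companies have published the exact watermarking schemes they will adopt, but the basic provenance idea can be illustrated with a minimal sketch: a provider attaches a signed manifest binding a generator label to a hash of the content, and anyone holding the verification key can later check whether the content is intact and carries a valid claim. All names and the key below are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider (illustrative only;
# a real scheme would use asymmetric signatures and a public trust list).
PROVIDER_KEY = b"example-provider-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Build a provenance manifest that binds metadata to the content hash."""
    manifest = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance(content: bytes, manifest: dict) -> bool:
    """Check the manifest matches the content and carries a valid signature."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

media = b"...rendered audiovisual bytes..."
tag = attach_provenance(media, generator="example-image-model")
print(verify_provenance(media, tag))         # True: untouched content
print(verify_provenance(media + b"x", tag))  # False: content was modified
```

Note that this hash-based manifest survives only if the file is distributed unchanged; robust watermarks embedded in the pixels or audio themselves are a separate, harder research problem, which is part of why the commitments mention both approaches.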
Furthermore, this collaboration will prioritize research into societal risks associated with AI, such as bias, discrimination, and privacy breaches. The companies will share best practices and establish a mechanism for ongoing collaboration.
The initiative, announced by the Biden administration, aims to address the risks posed by advanced AI models. Alongside these voluntary commitments, an executive order is being developed to further support these efforts.
With this collaborative approach, tech giants and OpenAI are taking significant steps to ensure the responsible development and use of AI. By focusing on safety, security, and trust, they aim to address potential risks and promote transparency within the AI industry.