The White House has secured AI safety commitments from leading tech companies, including Amazon, Google, Meta, Microsoft, and others. President Joe Biden hailed the voluntary effort as an important step toward managing the promise and risks of artificial intelligence technology. The commitments aim to ensure the safety of AI products before they are launched, with some calling for third-party oversight of commercial AI systems, though they do not specify who will audit the technology or hold companies accountable. The move comes amid concerns about the potential dangers of AI, such as the spread of disinformation and other forms of harm.
As part of the voluntary commitments, OpenAI, the creator of ChatGPT, along with startups Anthropic and Inflection and the four tech giants named above, has pledged to conduct security tests with independent experts. These tests will focus on guarding against major risks, such as threats to biosecurity and cybersecurity, broader social harms, and more theoretical dangers of advanced AI systems gaining control of physical systems or self-replicating. The companies are also developing ways to report system vulnerabilities and to use digital watermarks to distinguish real images from AI-generated ones, particularly deepfakes.
The companies have also pledged to publicly disclose flaws and risks in their AI technology, including its impact on fairness and bias. While the voluntary effort is seen as an immediate means of addressing risks, the administration aims to pursue long-term legislation to regulate AI. To solidify their commitment, company executives are scheduled to meet with President Biden at the White House.
Advocates for AI regulation have welcomed the move as a starting point but emphasize the need for broader public deliberation to effectively address the risks associated with AI, arguing that closed-door consultations with business officials may not be sufficient to ensure the necessary safeguards. Senate Majority Leader Chuck Schumer has expressed his intention to introduce legislation to regulate AI, and many tech executives have already called for regulation, having visited the White House to discuss the matter earlier this year.
However, concerns have been raised that emerging regulations could advantage well-funded companies like OpenAI, Google, and Microsoft, while smaller companies shoulder greater compliance costs. The White House commitments currently apply only to models more powerful than the existing industry frontier.
Many countries are considering regulatory approaches to AI, including the European Union, which is negotiating comprehensive AI rules for its member states. United Nations Secretary-General António Guterres has also highlighted the need for global standards and a specialized UN body to manage AI. The White House has stated that it is already in discussions with various countries about voluntary initiatives.
While the safety pledge focuses on these risks, it does not address other concerns about AI technology, such as its impact on jobs, market competition, environmental resources, and copyright. Notably, OpenAI recently announced a deal with The Associated Press to license its archive of news articles.
In sum, the White House has obtained voluntary commitments from leading tech companies to ensure the safety of AI products. While the move is considered a significant step, experts urge broader public deliberation and further measures to hold companies and their products accountable, with the goal of striking a balance between innovation and regulation as AI technology continues to advance.