OpenAI, Google, and other top AI companies have pledged to implement new safety measures for AI content, including the use of watermarks. This announcement comes as the Biden administration aims to regulate the technology, which has seen significant investment and popularity.
The companies involved in this commitment, including OpenAI, Alphabet (Google’s parent company), Meta Platforms (formerly known as Facebook), Anthropic, Inflection, Amazon, and Microsoft, have agreed to thoroughly test their AI systems before releasing them. They will also share information on how to reduce risks and invest in cybersecurity.
The rise of generative AI, which uses data to create new content, has prompted lawmakers worldwide to consider ways to address the potential risks to national security and the economy. In June, U.S. Senate Majority Leader Chuck Schumer called for comprehensive legislation to ensure safeguards on artificial intelligence. Additionally, Congress is currently discussing a bill that would require political ads to disclose whether AI was used in their creation.
President Joe Biden is hosting executives from these seven companies at the White House to discuss the regulation of AI technology. As part of this effort, the companies have committed to developing a watermarking system for all forms of AI-generated content, including text, images, audio, and video. The watermark will be embedded at a technical level, allowing users to more easily identify deep-fake images, audio, or video. Generative AI can otherwise be used to create misleading content, scams, or politically manipulated images.
The specific details of how the watermark will be evident during content sharing are yet to be determined.
Besides watermarking, the companies have also promised to prioritize user privacy in the development of AI, to ensure the technology is free of bias, and to prevent it from being used to discriminate against vulnerable groups. They have also pledged to apply AI to scientific challenges such as medical research and climate change mitigation.
By adhering to these voluntary commitments, the top AI companies aim to enhance the safety and reliability of AI technology. Watermarking AI-generated content is intended to help combat misinformation and protect users from potential harm. With ongoing discussions at the government level and collaboration among industry leaders, the regulation of AI is progressing in a manner that addresses both the opportunities and risks of this evolving technology.