AI companies including OpenAI, Alphabet, and Meta Platforms have made voluntary commitments to the White House to implement safety measures for artificial intelligence (AI), including watermarking AI-generated content, according to US President Joe Biden. These companies, along with others like Anthropic, Inflection, Amazon, and Microsoft (OpenAI’s partner), have pledged to thoroughly test systems before releasing them and to share information on risk reduction and cybersecurity investment.
In a White House event addressing concerns about the potential misuse of AI, Biden emphasized the need for vigilance and clarity regarding emerging technologies and their impact on US democracy. The commitments made by these companies mark a promising step forward, but Biden acknowledged that there is still a lot of work to be done.
The move by these companies is seen as a victory for the Biden administration’s efforts to regulate AI technology, which has experienced significant growth in investment and consumer popularity. Microsoft expressed its support for the president’s leadership in bringing the tech industry together to make AI safer, more secure, and more beneficial for the public.
Lawmakers around the world have been considering ways to mitigate the risks associated with generative AI, which uses data to create new content such as the human-like prose generated by ChatGPT. The EU recently drafted regulations requiring disclosure of AI-generated content, distinguishing deep-fake images from real ones, and implementing safeguards against illegal content.
While the US is lagging behind the EU in AI regulation, progress is being made. Senator Chuck Schumer called for comprehensive legislation to advance and ensure safeguards on AI, and Congress is currently considering a bill that would require political ads to disclose the use of AI in creating content.
During the White House meeting, Biden revealed that he is working on an executive order and bipartisan legislation focused on AI technology. The seven companies in attendance committed to developing a watermarking system that would be applied to all forms of AI-generated content, including text, images, audio, and video. This watermarking would make it easier for users to identify deep-fake images or audio that depict violence that never occurred, facilitate scams, or manipulate pictures of politicians to portray them negatively.
How the watermark will be made evident in shared content remains unclear. In addition to watermarking, the companies have pledged to prioritize user privacy and to ensure that AI technology is free of bias and is not used to discriminate against marginalized groups. They will also focus on developing AI solutions for scientific purposes such as medical research and climate change mitigation.
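Since the companies have not disclosed how such watermarks would work, the following is purely a toy illustration of one general family of approaches: attaching a cryptographically signed tag to generated text so that provenance can later be checked. The key, tag format, and function names here are hypothetical and do not reflect any company's actual scheme.

```python
import hmac
import hashlib

# Hypothetical shared key for the demo; real provenance schemes
# use far more robust techniques (e.g., statistical watermarks).
SECRET_KEY = b"demo-key"

def watermark(text: str) -> str:
    """Append a signed tag marking the text as AI-generated."""
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{text}\n[ai-generated:{tag}]"

def verify(marked: str) -> bool:
    """Check that the trailing tag matches the preceding text."""
    try:
        body, footer = marked.rsplit("\n[ai-generated:", 1)
    except ValueError:
        return False  # no tag present
    tag = footer.rstrip("]")
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(tag, expected)
```

A metadata tag like this is trivially strippable, which is one reason researchers favor watermarks embedded in the content itself; the sketch only conveys the verification idea.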
Overall, these commitments by leading AI companies demonstrate a significant step towards making AI technology safer and more beneficial for society. With the ongoing effort to regulate AI and the combined commitment of public and private sectors, the progress toward responsible and secure AI implementation continues.