The US has issued a new executive order aimed at improving the safety and security of artificial intelligence (AI). President Joe Biden’s order requires AI companies to be more transparent about how their models work and establishes new standards for labeling AI-generated content. It also includes a provision requiring developers to share safety test results with the US government if the technology poses a risk to national security. The executive order builds on the voluntary AI commitments the White House secured earlier this year. While experts consider it a significant step forward, some argue that it falls short in protecting people from immediate harms caused by AI. Here are the key takeaways from the order:
New Rules for Labeling AI-Generated Content:
The executive order directs the Department of Commerce to develop guidance for labeling AI-generated content. AI companies will use this guidance to build labeling and watermarking tools, which federal agencies are encouraged to adopt. The goal is to let people easily identify whether text, audio, or visual content was created with AI, helping combat problems such as deepfakes and disinformation. Leading AI companies such as Google and OpenAI have already made voluntary pledges to develop similar technologies.
A Focus on Watermarking and NIST Standards:
Experts have praised the executive order for its emphasis on watermarking and on standards developed by the National Institute of Standards and Technology (NIST). Watermarking embeds a detectable signal in AI-generated content so its provenance can be verified later, while NIST standards promote reliability and consistency in how AI systems are tested and evaluated. By incorporating these elements, the order aims to enhance the overall safety and security of AI technologies.
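The order does not prescribe any particular watermarking technique, but one well-known family of schemes for text works by biasing a generator toward a pseudorandom "green list" of words, so a detector can later measure how often consecutive word pairs fall on that list. The sketch below is purely illustrative, not anything mandated by the order: the function names, the tiny vocabulary, and the hash-based green-list rule are all assumptions chosen for a minimal, self-contained demonstration.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to a 'green list'
    keyed on the previous word (a toy stand-in for real schemes)."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode("utf-8")).digest()
    return digest[0] % 2 == 0

def watermark_sample(vocab: list[str], length: int, start: str) -> str:
    """Toy 'watermarked generator': at each step, pick the first vocabulary
    word that is green given the previous word (fall back to vocab[0])."""
    words = [start]
    for _ in range(length):
        nxt = next((w for w in vocab if is_green(words[-1], w)), vocab[0])
        words.append(nxt)
    return " ".join(words)

def green_fraction(text: str) -> float:
    """Detector: fraction of consecutive word pairs that are green.
    Unwatermarked text scores near 0.5; watermarked text scores near 1.0."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

In a real system the bias would be applied to a language model's token probabilities rather than a word list, and the detector would report a statistical significance rather than a raw fraction, but the core idea (a secret, reproducible partition plus a frequency test) is the same.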
Enforcement and Congressional Legislation:
Although the executive order lacks specifics on enforcement, it represents a significant move towards AI regulation. However, executive orders can be easily overturned by future presidents and do not hold the same weight as congressional legislation. Given the polarized nature of Congress, the development of meaningful AI legislation in the near term seems unlikely. Without firm legislation, the long-term impact of the order remains to be seen.
Balancing AI Innovation and Immediate Harms:
The order’s focus on transparency and accountability in AI companies is seen as a positive step. However, critics argue that it does not go far enough to address the immediate harms caused by AI technologies. Striking a balance between encouraging AI innovation and ensuring adequate safeguards against potential risks will be an ongoing challenge.
In conclusion, the US executive order on AI rules and guidelines aims to enhance safety and security by promoting transparency, labeling, and adherence to NIST standards. While the order has been applauded as a significant advancement, concerns remain regarding enforcement and the need for comprehensive AI legislation.