President Joe Biden has taken a significant step toward governing artificial intelligence (AI) by signing an executive order intended to ensure the safety and trustworthiness of the technology. In his view, AI is already all around us; it has the potential to bring immense benefits, but it also carries risks that need to be addressed.
The order, which may need further support from Congress, seeks to guide the development of AI in a way that allows companies to profit while safeguarding public safety. To achieve this, the order invokes the Defense Production Act to require leading AI developers to share safety test results and other relevant information with the government.
Additionally, the National Institute of Standards and Technology will create standards to ensure that AI tools are safe and secure before they are released to the public, and the Commerce Department will issue guidance on labeling and watermarking AI-generated content so that authentic interactions can be distinguished from those generated by software.
The executive order also covers a wide range of other areas, including privacy, civil rights, consumer protections, scientific research, and worker rights. The aim is to address these issues in relation to AI and ensure that its deployment is responsible and beneficial.
Biden’s chief of staff, Jeff Zients, highlighted the urgency with which the president approached the issue, emphasizing the need to move as fast as the technology itself. This sense of urgency stems from the recognition that the government was slow to address the risks of social media, which has since been linked to mental health problems among US youth. Biden does not want to repeat that mistake with AI.
The order builds on voluntary commitments made by technology companies and forms part of a broader strategy that involves congressional legislation and international diplomacy. The Biden administration is leveraging the levers it can control to shape private sector behavior and set an example for the federal government’s own use of AI.
Various governments worldwide have also taken steps to establish protections and regulations around AI. The European Union is nearing the final stages of passing a comprehensive law to rein in AI harms, while China has implemented its own rules. The United Kingdom is seeking to position itself as an AI safety hub, and the Group of Seven nations recently agreed on AI safety principles and a voluntary code of conduct for developers.
In the United States, the focus on AI is particularly significant because the country is home to the leading AI developers. Tech giants such as Google, Meta, and Microsoft, along with startups like OpenAI, are headquartered on the country’s West Coast. The White House has already secured commitments from these companies to implement safety mechanisms in their AI models.
However, the Biden administration also faced pressure from Democratic allies, including labor and civil rights groups, to ensure that its policies address real-world harms caused by AI. One of the challenges the administration grappled with was how to regulate law enforcement’s use of AI tools, such as facial recognition technology, which has been linked to racial biases and mistaken arrests.
While the EU’s forthcoming AI law bans real-time facial recognition in public, Biden’s executive order falls short of such explicit restrictions. Instead, federal agencies will review how AI is being used in the criminal justice system, leaving room for improvement in protecting individuals from potential harms.
Overall, the executive order represents a significant initiative in governing AI and ensuring its responsible and safe development. It sets the stage for further action by Congress and demonstrates the Biden administration’s commitment to leading on AI regulation and standards. By addressing privacy, civil rights, and other concerns related to AI, the order aims to strike a balance between harnessing the technology’s benefits and mitigating potential risks.
The guidance within the order will be phased in over 90 to 365 days. It is a comprehensive and proactive approach to governing AI, reflecting the need to keep pace with a rapidly evolving technology and its implications.