Biden’s Ambitious AI Executive Order Puts the US at the Forefront of AI Regulation
US President Joe Biden has issued a sweeping executive order on artificial intelligence (AI), placing the United States at the forefront of the global conversation on regulating AI technologies. In doing so, the US has overtaken Europe, which previously led the way with its AI Act; that legislation, however, won't fully take effect until 2025.
The executive order encompasses a wide range of initiatives for regulating AI, addressing concerns that run from immediate issues such as AI-generated deepfakes, through job losses, to the potential existential threats some attribute to the technology.
Given how slowly the US Congress has moved on significant regulation of big tech companies, the executive order is seen as an attempt to initiate necessary action while bypassing a frequently deadlocked legislature. One example is the order's call for bipartisan data privacy legislation, though achieving genuine bipartisan support in the current political climate may prove challenging.
The order is set to be implemented over the next three months to one year and focuses on eight key areas. On the one hand, it addresses many concerns raised by academics and the public. For instance, the order includes a directive to provide official guidance on watermarking AI-generated content to mitigate the risk of deepfakes. It also requires companies to demonstrate the safety of their AI models before widespread deployment, ensuring they pose no national security or safety risks.
However, the order falls short on certain pressing issues. It does not directly tackle the challenge of autonomous killer AI robots, a topic discussed at the recent United Nations General Assembly, and it lacks clear stipulations on ethical guidelines for military uses of AI. It also fails to address the need to protect elections from AI-powered weapons of mass persuasion, a threat that has already surfaced in several countries, most recently in Slovakia's election.
Despite these shortcomings, many initiatives in Biden’s executive order could serve as a model for other countries, including Australia. The order calls for providing guidance on preventing AI algorithms from discriminating against individuals in areas such as housing and government programs. It also highlights the importance of addressing algorithmic discrimination within the criminal justice system, where AI is increasingly utilized for high-stakes decisions.
One of the most controversial aspects of the order is its focus on regulating the potential harms posed by powerful frontier AI models. While some experts believe these models, developed by companies like OpenAI and Google, present an existential threat to humanity, others argue such concerns are overblown and distract from more immediate harms like misinformation and inequity.
To address the risks associated with frontier models, the executive order leverages extraordinary war powers, specifically the Defense Production Act of 1950. It requires companies to notify the federal government when training such models and to share the results of red-team safety tests, in which internal testers probe the models for dangerous behaviour. Policing frontier models may nonetheless prove challenging, as companies can move development overseas or into open-source communities.
Overall, the impact of Biden's executive order is expected to fall primarily on the government itself and its own use of AI, rather than on businesses. Nevertheless, it represents a welcome step toward regulating AI, somewhat overshadowing the UK's upcoming AI Safety Summit hosted by Prime Minister Rishi Sunak. While challenges lie ahead, the order marks a significant move by the US government to assume a leading role in shaping the future of AI regulation.
Source: The Conversation (Au and NZ) – By Toby Walsh, Professor of AI, Research Group Leader, UNSW Sydney