Biden’s Executive Order on AI: Preparing for the Expected Boom in 2024
2024 is anticipated to be a pivotal year for the mainstream adoption of artificial intelligence (AI), particularly generative AI systems like OpenAI’s ChatGPT. The Economist, Forbes, and Goldman Sachs have all published headlines predicting the widespread integration of generative AI across sectors. While AI has the potential to bring about significant advancements, it also carries serious risks.
Recognizing this, President Biden signed an extensive executive order on October 30, 2023, setting forth new federal standards for safe, secure, and trustworthy AI. This order represents the most far-reaching official action by the U.S. government concerning AI to date.
The executive order emphasizes the importance of responsible AI utilization in addressing urgent challenges and promoting prosperity, productivity, innovation, and security. However, it also acknowledges that irresponsible AI deployment could lead to societal harms such as fraud, discrimination, bias, disinformation, workforce displacement, diminished competition, and national security risks.
While the order is a promising step toward a responsible framework for AI regulation, it is only a beginning: a framework whose effectiveness can be gauged only once its policies are actually implemented.
A key limitation is that the President has far more authority over how the federal government uses AI than over how the private sector does. As a result, congressional action and international agreements will be essential to give the order’s provisions full effect.
Considering this, here are four priorities that policymakers and legislators should uphold as they work towards implementing the executive order:
1. Balancing AI Regulation and Innovation: Federal regulatory efforts must strike the right balance between addressing legitimate concerns about AI and continuing to foster innovation. Innovation is essential for better decision-making, operational efficiency in businesses, improved healthcare, optimized energy management, stronger cybersecurity, and many other AI benefits. The Software and Information Industry Association emphasizes this point, cautioning against impeding the innovation critical to realizing AI’s potential to address societal challenges.
2. Inclusive Approach: The order built on voluntary commitments made by seven AI companies and on public input gathered through the Blueprint for an AI Bill of Rights. This inclusive approach should continue in 2024, drawing on insights from a range of stakeholders, including industry, academia, and the general public. Such involvement is vital for identifying the unique opportunities and risks associated with AI, shaping policy, and establishing effective, reasonable regulations.
3. Geopolitical Implications: As AI-powered cyberwarfare emerges as a potent tool for disrupting world order, it is imperative to focus on AI development that safeguards national security interests. The United Kingdom, European Union, and Canada have already released guidelines emphasizing ethical and responsible AI development. The U.S. must not lose sight of AI’s geopolitical consequences.
4. Global Collaboration: In developing the executive order, the U.S. consulted numerous countries, recognizing that AI is a global issue. Strengthening collaboration with partners such as Australia, Brazil, Canada, the European Union, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, and the U.K. is crucial to tackling AI’s challenges collectively.
With the executive order paving the way, 2024 will likely see accelerated adoption of AI alongside new governance initiatives. These four priorities will help maximize AI’s potential as a force for good while minimizing its risks. The coming years promise an exciting period for shaping AI regulation and its impact on society.