US and UK Compete in AI Regulation Race, EU and China Join Fray: Global Cooperation or Tensions?
In a bid to assert leadership in AI and shape global regulation, both the United States and the United Kingdom are making significant strides. Recent developments in the AI regulatory landscape indicate a degree of global cooperation, though it appears cautiously limited.
The United Kingdom recently concluded an international AI safety summit at Bletchley Park to address potential risks associated with artificial intelligence. The summit brought together representatives from several nations, including the US, China, and Saudi Arabia, alongside prominent tech CEOs such as Elon Musk and Sam Altman.
However, tensions seem to be simmering between the US and UK. While the US signaled support for the summit by sending Vice President Kamala Harris, it also managed to upstage the UK's efforts. President Joe Biden signed a sweeping executive order on AI, which he touted as the most significant action any government anywhere in the world has ever taken on AI safety. Vice President Harris, in a subsequent speech at the US embassy in London, announced the establishment of a new AI Safety Institute, closely on the heels of the UK's unveiling of a similar institution.
This sequence of events suggests a race between the US and UK to lead on AI regulation. Both countries are vying to be at the forefront of governing transformative new technologies, especially in light of perceived failures to regulate digital privacy and the harms of social media platforms.
While President Biden emphasizes the US’s role in AI regulation, the UK can highlight its diplomatic achievement in garnering agreement from the US, China, and other global powers on fundamental AI principles.
However, the race for AI regulation is not limited to the US and the UK alone. The European Union proposed a stringent AI Act back in 2021, which is expected to be finalized and passed into law soon. Additionally, China has enacted laws requiring AI companies to register their services with the government and undergo a security review before bringing them to market.
The situation can be described as an AI regulation arms race with various actors aiming to demonstrate their proactivity and ambition in addressing AI-related issues, both in terms of timing and scope.
Different countries are taking distinct approaches to AI regulation. President Biden's executive order focuses primarily on immediate AI risks related to security, bias, job displacement, and fraud. In contrast, the UK summit emphasized the potential long-term threats posed by advanced frontier AI models. In either case, the true measure of success lies in the execution of these policies.
While declarations and executive orders are a starting point, they remain vague and lack enforcement mechanisms. President Biden's executive order assigns responsibility to existing government agencies, while the UK's Bletchley Declaration has no legal framework for enforcement. Both documents lack concrete measures to hold signatories accountable for their behavior.
Though a clear global framework for governing AI is far from established, these recent developments bring us a step closer to meaningful policy. Healthy competition can drive innovation, but a balanced approach is necessary to shape a robust regulatory landscape for AI.