Title: Key Similarities and Differences: EU AI Act and US Executive Order Shape Global AI Governance
In October 2023, the White House released an Executive Order (EO) outlining a comprehensive strategy for the development and deployment of safe and secure AI technologies, just days before the international AI Safety Summit in the UK. For its part, the European Commission proposed the Regulation Laying Down Harmonised Rules on Artificial Intelligence (the EU AI Act) in 2021, and the text is currently under negotiation. Both the EO and the AI Act play significant roles in shaping global AI governance and regulation. Let’s delve into the key similarities and differences between the two approaches.
Comparison in Approach:
The EU’s AI Act aims to establish a new regulation modeled on EU product-safety legislation, imposing detailed technical and organizational requirements on providers and users of AI systems. Providers of high-risk AI systems would bear the heaviest obligations, covering areas such as data governance; training, testing, and validation; conformity assessments; risk management systems; and post-market monitoring. The Act also prohibits certain uses of AI systems outright and imposes transparency obligations.
In contrast, the EO does not introduce new legislative obligations. Instead, it directs government agencies to act: for example, it instructs the Department of Commerce to develop rules requiring disclosures, under specific circumstances, from companies involved in AI model development or infrastructure. The EO also has a broader scope, encompassing social issues such as equity, civil rights, and worker protections, and it directs the State Department to lead international efforts to establish AI governance frameworks.
Another noteworthy difference lies in enforcement. The proposed AI Act establishes a complex oversight and enforcement regime, with fines of up to EUR 30 million or, depending on the violation, 2% to 6% of global annual turnover. The EO, by contrast, contains no enforcement provisions of its own.
Areas of Common Ground:
Both the AI Act and the EO focus on high-risk AI systems. The AI Act categorizes systems by risk level and imposes significant compliance requirements on those deemed high-risk, including designing AI systems to enable record-keeping, facilitating human oversight, and ensuring an appropriate level of accuracy, robustness, and cybersecurity. The EU Parliament’s version of the AI Act proposes additional obligations for foundation models, defined as AI models trained on broad data and designed for generality of output, adaptable to a wide range of tasks.
Similarly, the EO concentrates on high-risk AI systems, requiring developers to share safety test results and other critical information about dual-use foundation models that pose serious security risks. These red-teaming and reporting requirements apply to models meeting certain technical thresholds set out in the EO.
Both the AI Act and the EO address transparency requirements. Under the AI Act, AI systems designed to interact with individuals must be identifiable as such, and users of systems that perform emotion recognition or biometric categorization, or that generate manipulated content such as deepfakes, must inform the people exposed to them. The EO tackles transparency by requiring a report identifying standards, tools, methods, and practices for authenticating, labeling, detecting, and preventing synthetic content; following that report, the Director of the Office of Management and Budget (OMB) is to issue guidance to federal agencies on labeling and authenticating synthetic content.
Moreover, both the AI Act and the EO emphasize the importance of standards. The AI Act promotes the development of harmonized technical standards for AI systems, along with AI regulatory sandboxes that encourage compliance within a controlled environment. The EO tasks the U.S. National Institute of Standards and Technology (NIST) with issuing guidelines for AI development, with the aim of fostering consensus industry standards, and directs the Secretary of State to devise a plan for global engagement in developing AI standards.
In conclusion, while the EU AI Act and the US Executive Order differ notably in approach and enforcement mechanisms, they share common ground on high-risk AI systems and transparency requirements, and both recognize the importance of standards in shaping the future of AI governance. Collaboration between the US and EU in this domain is further supported by the U.S.-EU Trade and Technology Council’s joint Roadmap for Trustworthy AI and Risk Management, which aims to advance collaborative approaches in international AI standards bodies.
Sources:
– EU AI Act: [Link to the blog post]
– US Executive Order: [Link to the blog post]
– U.S.-EU Trade and Technology Council’s joint Roadmap for Trustworthy AI and Risk Management of December 2022: [Link to the blog post]