European Union negotiators have reached a deal on the world’s first comprehensive artificial intelligence rules. The agreement will pave the way for legal oversight of AI technology, which has the potential to transform everyday life while also raising concerns about its existential dangers to humanity.
The deal was clinched on Friday after intense negotiations between the European Parliament and the bloc’s 27 member countries. It resolves several contentious points, including generative AI and police use of facial recognition surveillance. The tentative political agreement, known as the Artificial Intelligence Act, marks a significant milestone for the EU.
European Commissioner Thierry Breton announced the deal on Twitter, emphasizing that the EU has become the first continent to establish clear regulations for the use of AI. Negotiations stretched across marathon closed-door sessions, with the initial round lasting 22 hours and a second round commencing on Friday morning.
While the deal is seen as a political victory, civil society groups have expressed reservations, highlighting the need for further technical details to be clarified in the coming weeks. They argue that the deal does not go far enough in protecting individuals from the potential harms caused by AI systems.
Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group, stated that the political deal is just the beginning, and important technical work is needed to address crucial details of the AI Act.
The EU has been leading the global race to establish AI regulations, unveiling the first draft of its rulebook in 2021. However, the recent surge in generative AI has prompted European officials to update the proposal, aiming to set a blueprint for the rest of the world.
Although the European Parliament still needs to vote on the act early next year, this is considered a formality. Brando Benifei, an Italian lawmaker co-leading the negotiating efforts, expressed his satisfaction with the agreement, stating that while compromises had to be made, it was overall a very positive outcome. The legislation is not expected to take full effect until at least 2025 and will impose significant financial penalties for violations.
Generative AI systems like OpenAI’s ChatGPT have garnered attention for their ability to produce human-like text, photos, and songs. However, they have also raised concerns about potential risks to employment, privacy, copyright protection, and even human life.
Other countries, including the United States, the United Kingdom, China, and global coalitions like the Group of 7 major democracies, have also proposed their own regulations for AI. However, they are still playing catch-up with Europe, and the comprehensive rules established by the EU could serve as a powerful example for governments considering AI regulation worldwide.
Anu Bradford, a Columbia Law School professor specializing in EU law and digital regulation, believes that other countries may not copy every provision of the AI Act but will likely emulate many aspects of it. AI companies subject to the EU’s rules may also extend similar obligations beyond the continent to maintain efficiency in global markets.
The AI Act was initially designed to regulate specific AI functions according to their level of risk. However, negotiators expanded its scope to include foundation models, the advanced systems underlying general-purpose AI services like ChatGPT and Google’s Bard chatbot.
Foundation models proved one of the thorniest issues for negotiators, with France leading the opposition to stricter rules. Under the compromise reached, companies building foundation models must provide technical documentation, comply with EU copyright law, and disclose the data used for training. Advanced foundation models that present systemic risks will undergo additional scrutiny: their developers must assess and mitigate those risks, report serious incidents, implement cybersecurity measures, and report on their models’ energy efficiency.
Researchers have warned that powerful foundation models developed by a few major tech companies could be misused for purposes such as online disinformation, cyberattacks, or even the creation of bioweapons. Rights groups have also expressed concerns about the lack of transparency regarding the data used to train these models, which serve as basic building blocks for developers creating AI-powered services that touch daily life.
One of the most contentious topics in the negotiations was the use of AI-powered face recognition surveillance systems. European lawmakers sought a complete ban on their public use due to privacy concerns. However, exemptions were negotiated to allow law enforcement agencies to use these systems in tackling serious crimes like child sexual exploitation or terrorist attacks.
Rights groups remain concerned about the exemptions and other significant loopholes in the AI Act. They highlight the lack of protection for AI systems used in migration and border control and the option for developers to opt out of classifying their systems as high risk.
While acknowledging some victories in the final negotiations, Daniel Leufer, a senior policy analyst at digital rights group Access Now, believes that significant flaws will remain in the final text of the legislation.
As the European Union takes the lead in establishing comprehensive AI rules, it sets an example for governments worldwide grappling with the regulation of this transformative technology. However, it remains to be seen how these rules will be implemented and whether they will effectively address the potential risks and challenges associated with artificial intelligence.