Delegates from the European Commission, the European Parliament, and the 27 member countries have reached a significant milestone in finalizing the groundbreaking AI Act. The legislation aims to regulate AI companies such as OpenAI and Google, with a particular focus on generative AI. Negotiators have agreed on controls for tools such as OpenAI’s ChatGPT and Google’s Bard, which can produce content on demand.
The AI Act is poised to become the most comprehensive and far-reaching artificial intelligence legislation in the Western world. With the US government and other major governments yet to take significant action in this area, the European Union has taken the lead in establishing a landmark AI policy. Formal adoption of the regulation is expected soon, with policymakers aiming to pass it before the European elections in June.
Finding a balance between nurturing AI startups and addressing the potential societal risks posed by generative tools has been a challenge for the EU. During the negotiations, countries such as France and Germany expressed concerns about rules that could disadvantage local companies. Despite these obstacles, officials remain optimistic about reaching a deal.
The plan proposed by EU policymakers includes requirements for developers of AI models such as the one underlying ChatGPT. These developers would be required to maintain documentation on how their models were trained, summarize the copyrighted material used in training, and clearly label AI-generated content. AI systems deemed to pose systemic risks would be subject to an industry code of conduct mandating cooperation with the Commission, incident monitoring, and reporting.
The proposal also addresses foundation models, which serve as the base for many AI applications. Researchers have observed unexpected behaviors in these models, which in some cases produce misleading responses. To address this, the EU proposes that companies transparently document their systems’ training data and capabilities, demonstrate efforts to reduce risks, and undergo audits by external researchers.
While influential EU countries such as France, Germany, and Italy have contested these proposals, arguing for self-regulation by makers of generative AI models, the EU is determined to strike the right balance. Their concern is that stricter regulations could undermine European companies’ competitiveness against major US players like Google and Microsoft.
As discussions continue and the technical details of the AI Act are ironed out, the European Commission remains committed to finalizing the legislation. The hope is that the AI Act will protect the interests of AI startups while also addressing the risks associated with generative AI tools. With the EU taking the lead, the world is watching as the region sets the global standard for regulating artificial intelligence.