European politicians in two key committees have approved new rules to regulate artificial intelligence (AI) ahead of a landmark vote that could pave the way for the world’s first legislation on the technology.
The provisional legislation, endorsed on Tuesday by the European Parliament’s committees on civil liberties and consumer protection, aims to ensure that AI systems respect fundamental rights. It would establish guardrails for AI applications across a range of sectors, including banking, automotive, consumer electronics, aviation, security, and policing. The goal is to strike a balance between fostering innovation and guarding against risks such as disinformation, job displacement, and copyright infringement.
The legislation, known as the AI Act, has been seen as a global benchmark for governments seeking to harness the benefits of AI while mitigating associated risks. It was proposed by the European Commission in 2021 but faced delays due to debates surrounding the regulation of language models and the use of AI by police and intelligence services.
The AI Act will also regulate foundation models, or generative AI, such as the systems developed by Microsoft-backed OpenAI. These models are trained on large datasets and can learn from new data to perform a wide range of tasks.
The committees’ endorsement has been hailed as a significant step towards comprehensive rules on AI in Europe. Eva Maydell, MEP for Tech, Innovation, and Industry, described the result as one that fosters societal trust in AI while allowing companies the freedom to innovate. Deirdre Clune, MEP for Ireland South, likewise highlighted the progress made towards comprehensive AI regulation.
Earlier this month, European Union member states backed a deal on the AI Act that seeks to tighten control over governments’ use of AI in biometric surveillance and over how AI systems are regulated. France secured concessions to ease the administrative burden on high-risk AI systems and to provide better protection for business secrets.
Under the legislation, tech companies operating in the EU will be required to disclose the data used to train their AI systems and to have their products tested, particularly those deployed in high-risk areas such as self-driving vehicles and healthcare.
The legislation prohibits the indiscriminate scraping of images from the internet or security footage to create facial recognition databases. However, exemptions are included for the real-time use of facial recognition by law enforcement in the investigation of terrorism and serious crimes.
The approval of the legislation by the European Parliament committees marks a significant milestone in the regulation of AI technology. If passed in the upcoming vote scheduled for April, the AI Act could set a precedent for governments worldwide in establishing comprehensive rules for the responsible and ethical use of AI while fostering innovation and protecting fundamental rights.