The European Union (EU) is set to implement groundbreaking legislation regulating artificial intelligence (AI) systems, the world’s first comprehensive set of AI laws. The EU AI Act aims to ensure the safety of AI systems while upholding fundamental rights and EU values. The move comes as AI development accelerates and concerns grow over the technology’s misuse in scams and in spreading misinformation.
The legislation, which has been approved by EU governments, is now awaiting final sign-off from the European Parliament and is expected to take full effect in 2026. The law categorizes AI models by their level of risk and imposes the strictest rules on the riskiest applications. It also sets out separate regulations for general-purpose AI models, such as the one underlying ChatGPT, which have broad and unpredictable uses.
Under the new law, AI systems posing unacceptable risk will be banned outright, such as those that use biometric data to infer sensitive characteristics. High-risk applications, including the use of AI in hiring and law enforcement, will have to meet obligations: developers must demonstrate that their models are safe, transparent, and explainable, that they protect users’ privacy, and that they do not discriminate. Even lower-risk AI tools will require developers to disclose when users are interacting with AI-generated content. Violating these rules could result in fines of up to 7% of a firm’s annual global turnover.
Some researchers believe the act could encourage open science, while others worry it could stifle innovation. The EU has, however, taken steps to ensure that the legislation does not hamper research: the final version of the act exempts AI models developed purely for research, development, or prototyping.
Even where the act does not directly impede research, it will affect researchers indirectly through requirements for transparency, model reporting, and the mitigation of potential biases. Experts believe this framework will ultimately encourage greater adherence to good practice in AI research.
Critics argue that the law does not go far enough, pointing to loopholes for military, national-security, law-enforcement, and migration purposes. The EU, however, maintains that the act strikes a balance between effective regulation and preserving innovation.
The EU’s approach to regulation also differs from that of the United States, with the EU emphasizing open-source AI as a way to compete with the US and China. To oversee general-purpose models, the European Commission plans to establish an AI Office, advised by independent experts, which will assess the capabilities and risks of these models. Concerns have been raised, however, about whether a public body will have the capacity to adequately scrutinize submissions, given the potentially vast amount of data that would need to be reviewed.
In conclusion, the EU’s AI Act is a significant step toward regulating AI technology to ensure safety and protect fundamental rights. While some concerns remain, the act’s research exemption demonstrates the EU’s commitment to fostering innovation. The future of AI regulation will likely shape research practices, transparency, and the development of AI models as countries worldwide grapple with the challenges posed by this rapidly advancing technology.