EU lawmakers are expected to pass new rules governing artificial intelligence (AI) systems such as ChatGPT, with a focus on protecting citizens from potential risks while fostering innovation. The rules, known as the AI Act, take a risk-based approach, imposing the strictest requirements on providers of high-risk AI systems. Those providers must conduct risk assessments and ensure compliance with the law before their products reach the public.
The European Union aims to set a global standard for trustworthy AI and to regulate the technology in a balanced, proportionate manner. Violations can draw fines of 7.5 million to 35 million euros ($8.2 million to $38.2 million), or a percentage of a company's global annual turnover, depending on the severity of the infringement and the size of the business.
The AI Act prohibits the use of AI for predictive policing based on profiling, as well as systems that use biometric data to infer an individual's race, religion, or sexual orientation. Real-time facial recognition in public spaces is also banned, with narrow exceptions for law enforcement that require approval from a judicial authority before deployment.
With EU member states expected to approve the AI Act in April and formal publication to follow in May or June, most of the rules will apply two years after the law enters into force, while obligations for general-purpose AI models such as ChatGPT will apply after 12 months. The legislation has drawn heavy lobbying from stakeholders, including tech firms such as Google and Microsoft.
While some watchdogs warn that corporate lobbying could weaken the rules, the EU commissioner responsible for the file has emphasized the regulation's balanced nature. Tech lobbying groups, for their part, have expressed concern that the rules could hinder innovation and competitiveness in the European market.
Proper implementation of the AI Act will be crucial to balancing regulatory requirements against the need for innovation in a fast-evolving AI landscape. The focus remains on ensuring that the rules do not unduly burden companies while upholding ethical standards and fostering technological advancement.