The European Union (EU) has recently introduced the groundbreaking EU AI Act, which marks the world’s first comprehensive regulatory framework for artificial intelligence (AI). Designed to ensure that AI systems are human-centric, safe, and trustworthy, while also promoting innovation in AI, the AI Act is set to have a significant impact on businesses operating within the EU. With the Member States giving the green light to the AI Act and the European Council publishing the final text, it is now one step closer to implementation.
The EU AI Act covers the entire lifecycle of AI systems, starting from their development and extending to their usage and all stages in between. To help businesses prepare for compliance, Ashurst, a leading global law firm, has compiled a briefing that summarizes the key aspects of the AI Act and provides suggested steps to ensure compliance.
Businesses subject to the AI Act are urged to be proactive and assess how the regulations will apply to their operations. This involves mapping their AI systems and conducting a detailed assessment of their current systems, processes, and controls to identify any gaps in meeting the AI Act’s requirements. It is important to note that various obligations under the AI Act apply to different entities in the AI value chain, such as providers, distributors, importers, and deployers of AI systems.
The AI Act has a broad scope and has the potential to impact a wide range of organizations. This includes businesses operating within the EU, those offering goods or services to EU citizens, and even entities outside the EU if their AI systems affect EU citizens or are used within the EU. Conversely, the AI Act does not apply to AI systems that fall outside the scope of EU law or that have no impact on EU citizens and no use in the EU.
One of the key features of the AI Act is its risk-based approach, where different requirements apply to different risk classes of AI systems. The Act categorizes AI systems into four risk classes: unacceptable risk, high risk, limited risk, and minimal risk. Providers of high-risk AI systems have specific obligations and must ensure compliance with the Act’s requirements. They are also required to appoint an authorized representative in the EU if they are based outside the EU.
Deployers of high-risk AI systems, meaning primarily entities using such systems in a professional rather than personal capacity, are also subject to particular obligations. These include ensuring that the AI systems they use comply with the necessary requirements, maintaining technical documentation, and cooperating with national competent regulators.
Additionally, the AI Act introduces the concept of fundamental rights risk assessments. Operators of high-risk AI systems, whether public or private bodies, must undertake these assessments, except where the system is intended for use in critical public infrastructure. These assessments aim to ensure that AI systems are deployed in a manner consistent with fundamental rights.
The EU AI Act provides businesses with a two-year transition period for compliance from the date of its entry into force. During this transition period, the EU Commission plans to launch the AI Pact, which allows businesses to voluntarily commit to complying with specific obligations of the Act before the regulatory deadlines.
Failure to comply with the AI Act can result in significant fines. As with the General Data Protection Regulation (GDPR), fines can reach either a percentage of the business's global annual turnover in the previous financial year or a fixed amount, whichever is higher. Penalties are required to be effective, dissuasive, and proportionate, taking into account the interests and economic viability of SMEs and start-ups.
It is worth noting that the AI Act does not directly address liability or compensation for damage caused by AI systems. To fill this gap, the EU Commission has proposed two complementary liability regimes: the EU AI Liability Directive and the revised EU Product Liability Directive. These directives aim to provide redress for harm caused by AI systems and will be the focus of Ashurst's next article in their Emerging Tech Series.
With the EU AI Act on the horizon, businesses must prepare for the new regulations. Understanding the Act’s requirements, conducting assessments, and taking necessary compliance measures are crucial steps to ensure the smooth integration of AI systems while safeguarding human-centric values and fundamental rights. By proactively embracing the changes brought forth by the AI Act, businesses can position themselves as leaders in the evolving AI landscape.