EU Approves Groundbreaking AI Act: What Businesses Need to Know

The European Union (EU) has approved the groundbreaking EU AI Act, the world’s first comprehensive regulatory framework for artificial intelligence (AI). Designed to ensure that AI systems are human-centric, safe, and trustworthy while also promoting innovation, the AI Act is set to have a significant impact on businesses operating within the EU. With the Member States giving the green light and the European Council publishing the final text, the Act is now one step closer to implementation.

The EU AI Act covers the entire lifecycle of AI systems, from their development through to their deployment and use. To help businesses prepare for compliance, Ashurst, a leading global law firm, has compiled a briefing that summarizes the key aspects of the AI Act and suggests steps to ensure compliance.

Businesses subject to the AI Act are urged to be proactive and assess how the regulations will apply to their operations. This involves mapping their AI systems and reviewing their current systems, processes, and controls in detail to identify any gaps against the AI Act’s requirements. It is important to note that different obligations under the AI Act apply to different entities in the AI value chain, such as providers, distributors, importers, and deployers of AI systems.

The AI Act has a wide scope and the potential to affect a broad range of organizations. This includes businesses operating within the EU, those offering goods or services to EU citizens, and even entities outside the EU if their AI systems affect EU citizens or are used within the EU. Conversely, the AI Act does not apply to AI systems that fall outside the scope of EU law or that have no impact on EU citizens and no use in the EU.


One of the key features of the AI Act is its risk-based approach, where different requirements apply to different risk classes of AI systems. The Act categorizes AI systems into four risk classes: unacceptable risk, high risk, limited risk, and minimal risk. Providers of high-risk AI systems have specific obligations and must ensure compliance with the Act’s requirements. They are also required to appoint an authorized representative in the EU if they are based outside the EU.

Deployers of high-risk AI systems, broadly meaning organizations that use such systems in a professional rather than personal capacity, are also subject to particular obligations. These include ensuring that the AI system they use complies with the necessary requirements, maintaining technical documentation, and cooperating with national competent regulators.

Additionally, the AI Act introduces the concept of fundamental rights impact assessments. Deployers of high-risk AI systems that are public bodies, along with certain private operators, must undertake these assessments, except where the system is intended for use in critical infrastructure. These assessments aim to ensure that the use of high-risk AI systems respects fundamental rights.

The EU AI Act provides businesses with a two-year transition period for compliance from the date of its entry into force. During this transition period, the EU Commission plans to launch the AI Pact, which allows businesses to voluntarily commit to complying with specific obligations of the Act before the regulatory deadlines.

Failure to comply with the AI Act can result in significant fines. Similar to the General Data Protection Regulation (GDPR), the fines are capped at a percentage of the business’s global annual turnover in the previous financial year or a fixed amount, whichever is higher. Penalties will be effective, dissuasive, and proportionate, taking into consideration the interests and economic viability of SMEs and start-ups.


It is worth noting that the AI Act does not address liability for damages claims and compensation directly. To address this, the EU Commission has proposed two complementary liability regimes: the EU AI Liability Directive and the revised EU Product Liability Directive. These directives aim to provide redress for harm caused by AI systems and will be the focus of Ashurst’s next article in their Emerging Tech Series.

With the EU AI Act on the horizon, businesses must prepare for the new regulations. Understanding the Act’s requirements, conducting assessments, and taking necessary compliance measures are crucial steps to ensure the smooth integration of AI systems while safeguarding human-centric values and fundamental rights. By proactively embracing the changes brought forth by the AI Act, businesses can position themselves as leaders in the evolving AI landscape.


Advait Gupta
Advait is our expert writer and manager for the Artificial Intelligence category. His passion for AI research and its advancements drives him to deliver in-depth articles that explore the frontiers of this rapidly evolving field. Advait's articles delve into the latest breakthroughs, trends, and ethical considerations, keeping readers at the forefront of AI knowledge.
