EU AI Act Faces Roadblock: Big Tech’s Lobbying Threatens Landmark Legislation

The EU AI Act, a landmark piece of legislation aimed at regulating artificial intelligence (AI) systems, hangs in the balance amid disagreements over how to regulate 'foundation' models: AI models, such as GPT-4, Claude, and Llama, that are trained at massive scale. Recent lobbying by Big Tech and by open-source AI companies such as Mistral, which is advised by Cédric O, a former French digital minister, has led the French, German, and Italian governments to push for only limited regulation of such models. Critics view this push as a power grab that could undermine the effectiveness of the Act.

In response, proponents of covering foundation models under the EU AI Act have fought back. The Future of Life Institute, the organization behind the six-month 'pause' letter, has published a new open letter urging the German government not to exempt these models from the Act. Prominent AI researchers Geoffrey Hinton and Yoshua Bengio, who have warned of existential AI risks, as well as AI critic Gary Marcus, have signed it. Bengio and other French experts have also published a joint op-ed in Le Monde condemning Big Tech's attempts to undermine the legislation.

The recent drama at OpenAI helps explain the roadblock. The internal conflict that saw CEO Sam Altman fired and then reinstated after board changes illustrates the competing perspectives driving the EU AI Act debates: the commercial promise of AI on one side, and concern about the risks it poses on the other.


At OpenAI, Altman and president Greg Brockman focused on pursuing commercial opportunities to fund the development of artificial general intelligence (AGI). Three non-employee members of the board, Adam D'Angelo, Tasha McCauley, and Helen Toner, were more concerned about the safety risks of AGI-like technology, and were reportedly willing to shut the company down rather than allow the release of what they perceived as high-risk technology. This division within OpenAI mirrors the broader debate over AI regulation and risk.

The Effective Altruism movement, to which the three non-employee board members have ties, also plays a role in the lobbying around the EU AI Act. The movement has emphasized the idea that AI poses existential risks, concerns the Future of Life Institute shares. Big Tech, including OpenAI, has meanwhile lobbied against stringent regulation. Altman, for instance, has sent mixed messages, publicly advocating for AI regulation while privately seeking to water down the EU AI Act.

The OpenAI drama has prompted calls for strict regulation of Big Tech and skepticism regarding its ability to self-regulate. Gary Marcus has cited the OpenAI controversy as evidence that Big Tech cannot be trusted to regulate itself. Brando Benifei, one of the lawmakers leading negotiations on the EU AI Act, has similarly expressed doubts about voluntary agreements and emphasized the need for strong regulation.

The future of the EU AI Act remains uncertain, with negotiations ongoing. A final trilogue is set for December 6, and time is running out: the Spanish Council Presidency has only a month left before Belgium takes over in January 2024, and with European elections approaching in June 2024, the pressure to reach an agreement under Belgian leadership will only grow. Failure to pass the legislation would be a significant setback for the EU, which has positioned itself as a global pioneer in AI regulation.


In conclusion, the EU AI Act hangs in the balance as debates continue over the regulation of foundation models. Lobbying by Big Tech on one side and Effective Altruism proponents on the other, along with the recent drama at OpenAI, has intensified the discussions. Their outcome will determine how effectively the EU AI Act can address the risks posed by AI.

Frequently Asked Questions (FAQs) Related to the Above News

What is the EU AI Act?

The EU AI Act is a piece of legislation aimed at regulating artificial intelligence systems within the European Union.

What is the current roadblock faced by the EU AI Act?

The legislation is facing uncertainty and hangs in the balance due to disagreements over the regulation of 'foundation' models, which are AI models trained on a massive scale.

Who is lobbying against the regulation of foundation models?

Big Tech companies and open-source companies like Mistral, advised by Cédric O, a former digital minister for the French government, have been lobbying against the regulation of foundation models.

How have proponents of including foundation models in the EU AI Act responded to the lobbying efforts?

Proponents have fought back by publishing open letters and articles, urging governments not to exempt these models from the Act. Prominent AI researchers and critics have also expressed their concerns and condemnation of Big Tech's attempts to undermine the legislation.

How does the OpenAI drama relate to the debates around the EU AI Act?

The conflicts within OpenAI shed light on the competing perspectives in the debates. The disagreements within OpenAI's board reflect the broader debates on AI regulation and risks, including concerns about commercial profit and safety risks.

What role does the Effective Altruism movement play in the lobbying efforts?

The Effective Altruism movement, which emphasizes the existential risks posed by AI, has ties to the three non-employee members of OpenAI's board. The Future of Life Institute shares these concerns and has been actively involved in lobbying efforts.

What stance has Big Tech taken on AI regulation?

Big Tech, including OpenAI, has actively lobbied against stringent regulation. While figures such as Sam Altman have publicly advocated for AI regulation, their mixed messages suggest an attempt to water down the EU AI Act.

What is the outlook for the EU AI Act?

The future of the EU AI Act remains uncertain, with negotiations ongoing. A final trilogue is scheduled, and there is pressure to reach an agreement due to the upcoming European elections. Failure to pass the legislation would be a setback for the EU's position as a global pioneer in AI regulation.

