The EU AI Act, a pivotal piece of legislation aimed at regulating artificial intelligence (AI) systems, hangs in the balance due to disagreements over the regulation of ‘foundation’ models: large-scale AI models, such as GPT-4, Claude, and Llama, trained on broad data and adaptable to a wide range of tasks. Recent lobbying by Big Tech and by open-source companies such as Mistral, which is advised by Cédric O, a former French digital minister, has led the French, German, and Italian governments to advocate for limited regulation of such models. Some view this as a power grab that could undermine the Act’s effectiveness.
Proponents of including foundation models in the EU AI Act have pushed back against these efforts. The Future of Life Institute, the organization behind the six-month ‘pause’ letter, has published a new open letter urging the German government not to exempt these models from the Act. The signatories include prominent AI researchers Geoffrey Hinton and Yoshua Bengio, both of whom have voiced concerns about existential AI risks, as well as AI critic Gary Marcus. Separately, Bengio and other French experts have published a joint op-ed in Le Monde condemning Big Tech’s attempts to undermine the legislation.
The roadblock in the legislation’s progress is easier to understand in light of the recent drama at OpenAI. The conflict that saw CEO Sam Altman fired and then reinstated after board changes mirrors the competing perspectives in the EU AI Act debates: the commercial promise of AI on one side, and concerns about the risks it poses on the other.
At OpenAI, Altman and president Greg Brockman focused on pursuing commercial opportunities to fund the development of artificial general intelligence (AGI). Three non-employee members of the board, Adam D’Angelo, Tasha McCauley, and Helen Toner, were more concerned about the safety risks of AGI-like technology and were willing to shut down the company rather than allow the release of what they saw as a high-risk system. This division within OpenAI reflects the broader debate over AI regulation and risk.
The Effective Altruism movement, to which the three non-employee board members have ties, also figures in the lobbying around the EU AI Act. The movement has emphasized the idea that AI poses existential risks, a concern the Future of Life Institute shares. Big Tech, including OpenAI, has meanwhile lobbied against stringent regulation. Altman, for instance, has sent mixed messages, publicly advocating for AI regulation while privately seeking to water down the EU AI Act.
The OpenAI drama has prompted calls for strict regulation of Big Tech and skepticism regarding its ability to self-regulate. Gary Marcus has cited the OpenAI controversy as evidence that Big Tech cannot be trusted to regulate itself. Brando Benifei, one of the lawmakers leading negotiations on the EU AI Act, has similarly expressed doubts about voluntary agreements and emphasized the need for strong regulation.
The future of the EU AI Act remains uncertain, with negotiations ongoing. A final trilogue is set for December 6, and time is running out: the Spanish Council Presidency has only a month left before Belgium takes over in January 2024, and the European elections in June 2024 will add pressure to reach an agreement under Belgian leadership. Failure to pass the legislation would be a significant setback for the EU, which has positioned itself as a global pioneer in AI regulation.
In conclusion, the EU AI Act hangs in the balance as debates over the regulation of foundation models continue. Lobbying by Big Tech and Effective Altruism proponents, together with the recent drama at OpenAI, has intensified the discussion. The outcome of these debates will determine the Act’s effectiveness and its ability to address the risks associated with AI.