Here’s How the EU Will Regulate Advanced AI Models Like ChatGPT
New regulations are on the horizon for advanced artificial intelligence (AI) models, such as ChatGPT, in the European Union (EU). The EU has outlined plans to impose additional rules on models deemed to pose a systemic risk, based on the amount of computing power used during their training. The threshold for this risk designation is set at models trained using more than 10 trillion trillion (10^25) floating-point operations.
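To make the threshold concrete, here is a minimal sketch of how a compute-based check like this might be estimated. It uses the common "6 × parameters × tokens" heuristic for transformer training compute; that heuristic and the parameter/token counts below are illustrative assumptions, not figures from the AI Act or from any model provider.

```python
# Hypothetical sketch: checking whether an estimated training run crosses
# the EU AI Act's 1e25-operation systemic-risk threshold.
# Assumption: the widely used "6 * N * D" approximation for training FLOPs
# (N = parameters, D = training tokens); numbers below are made up.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # 10 trillion trillion operations


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens


def poses_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the Act's threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Illustrative run: a 1-trillion-parameter model trained on 10 trillion tokens
flops = estimated_training_flops(1e12, 10e12)
print(f"{flops:.2e} FLOPs -> systemic risk: {poses_systemic_risk(1e12, 10e12)}")
```

Under these assumed figures the estimate (6 × 10^25 operations) lands above the threshold, while a far smaller model would fall below it; the real designation would rest on actual compute records, not this back-of-envelope estimate.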
Currently, experts suggest that OpenAI’s GPT-4 is the only model that would automatically meet this threshold. However, the EU’s executive arm has the authority to designate other models as posing a systemic risk, taking into account factors such as the size of the data set, the presence of at least 10,000 registered business users in the EU, or the number of registered end-users.
To navigate this regulatory landscape, the developers of highly capable models like ChatGPT are expected to sign a code of conduct while the European Commission establishes more comprehensive, long-term controls. Developers who decline to sign would instead have to demonstrate compliance with the AI Act. Notably, the exemption for open-source models does not apply to models considered to pose a systemic risk.
The EU’s move to regulate advanced AI models comes as part of its efforts to ensure ethical and responsible development and deployment of AI technologies. By establishing rules and standards, the EU aims to strike a balance between fostering innovation and protecting the rights and safety of individuals.
Experts believe that these regulations will help instill public trust in AI technologies and ensure accountability among developers. According to Dr. Maria Lopez, an AI ethics researcher, "Regulating advanced AI models is a significant step towards ensuring the responsible use of powerful AI technologies. It is crucial to have clear guidelines and oversight in place to address potential risks and safeguard societal interests."
While the EU’s AI regulations are still in the development phase, they represent a landmark move that positions Europe at the forefront of AI governance. As other countries and regions grapple with similar challenges, the EU’s approach will likely serve as a guiding example. As AI continues to play an increasingly prominent role in various sectors, robust regulations are essential to managing associated risks and maximizing the benefits.
In conclusion, the EU’s plans to regulate advanced AI models like ChatGPT reflect an important step towards addressing the potential risks associated with these powerful technologies. By establishing criteria to determine systemic risk and requiring models to adhere to a code of conduct, the EU aims to ensure responsible and ethical AI development and deployment. With these regulations, the EU is setting a precedent for global AI governance.