OpenAI, the renowned AI research laboratory, is lobbying European officials to ease the proposed AI Act, which would place stringent regulations on high-risk AI systems such as facial recognition and social scoring tools. The company argues that its general-purpose AI systems, including GPT-4, should not fall under the high-risk category and should therefore be exempt from the Act’s corresponding obligations. OpenAI also contends that the Act’s requirements for transparency, traceability, and human oversight are too burdensome and could hinder innovation. Though the lobbying has succeeded to some extent, it remains unclear whether it will have a lasting impact: the European Parliament and the Council of the European Union are still negotiating the AI Act, and the final version may impose stricter rules on general-purpose AI.
The proposed AI Act aims to ban systems that pose an unacceptable level of risk, such as tools that forecast crime or assign social scores. It also introduces new limitations on high-risk AI that could sway voter opinions or harm people’s health. In addition, the legislation establishes new rules for generative AI, requiring content produced by systems like ChatGPT to be labeled as AI-generated and requiring developers to disclose summaries of the copyrighted data used for training. Earlier this month, the European Parliament voted in favor of the AI Act, which now goes to the Council of the European Union for approval.
The debate around the AI Act highlights the tension between regulating AI for safety and promoting innovation. OpenAI’s lobbying suggests that AI companies may prioritize protecting their commercial interests over ensuring that AI is used responsibly and safely. While the Act is a significant step forward in regulating AI, this regulation must be balanced against innovation, and the Act’s implementation must effectively safeguard against AI-related harm. Because the AI Act is likely to set the standard for AI regulation globally, monitoring its implementation will be crucial for protecting people from the risks these systems pose.