A few days ago, technology leader Sam Altman appealed to the US Senate Judiciary Committee for the regulation of artificial intelligence (AI). OpenAI’s CEO suggested that Congress create a new federal body to oversee the use of large language models such as GPT-4, and to establish licensing, auditing, and testing requirements. His proposal was well received by legislators, as Altman had actively engaged with them at a private event just before his testimony.
OpenAI, the company Altman leads, is recognized as a leader in the AI space. Its popular applications – ChatGPT, an AI-driven chatbot, and DALL-E, an AI image generator – have made Altman a prominent figure in the AI world.
Unfortunately, his charm offensive was not equally successful on the other side of the Atlantic. When Altman visited Europe last week, he was vocal in his disapproval of the European Union’s (EU) proposed AI Act, stating that it amounted to ‘over-regulation’ and that OpenAI might cease operating in Europe if the rules proved too onerous. The statement was met with apprehension by lawmakers, some of whom felt they were being ‘blackmailed’ by American companies.
In response to this criticism, Altman clarified at the end of the week that he had no plans to move OpenAI out of the EU. He and Google CEO Sundar Pichai had been in Europe to make an impression on regulators, and Pichai suggested that the companies reach a voluntary agreement – dubbed the AI Pact – to bridge the gap before official legislation was passed and enforced.
OpenAI continues to innovate and push the envelope in AI, making it an attractive partner for legislators. The company also appears aware of the responsibility this power carries, and says it will pursue measures to ensure AI’s safe and ethical use. Whether the AI Act, the AI Pact, or a combination of the two satisfies both the technology companies and the regulators remains to be seen.