OpenAI, co-founded by CEO Sam Altman, recently warned that it may stop operating in the European Union (EU) if the bloc's proposed regulations on artificial intelligence (AI) become law. Altman has pointed specifically to the draft EU AI Act, which seeks to hold generative AI products, like OpenAI's ChatGPT, to a higher standard of transparency. He has also called on the U.S. Congress to craft sensible regulations for the use of AI in the United States.
OpenAI's ChatGPT is a generative AI chatbot built on large language models that are pre-trained on vast datasets and produce text in response to user prompts. It is the kind of generative AI system that EU legislators propose to classify as "high risk," which would require companies to inform users that the content they receive was created by a computer rather than a human author. Altman believes the EU is "over-regulating" and plans to continue discussions with EU authorities in the hope of striking a balance between the traditional European and U.S. approaches to AI regulation.
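For illustration only, the sketch below shows one way generated text might be tagged as machine-generated at the point of creation, which would make the kind of user-facing disclosure contemplated by the draft Act straightforward downstream. It assumes the pre-1.0 `openai` Python SDK and a `generated_by` field of our own invention; neither is mandated by the proposal.

```python
import os
import openai  # assumes the pre-1.0 openai Python SDK (ChatCompletion interface)

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumption: API key supplied via environment variable


def generate_text(prompt: str) -> dict:
    """Query a generative model and return its output together with provenance metadata."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return {
        "text": response.choices[0].message.content,
        "generated_by": "gpt-3.5-turbo",  # provenance flag used later for user-facing disclosure
    }


if __name__ == "__main__":
    result = generate_text("Draft a short product description for a travel mug.")
    print(result["text"])
```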
Meanwhile, at a meeting in London, the British Prime Minister and the leaders of several AI companies pledged to cooperate and to foster AI development that benefits society. In the United States, Altman testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, where he said that government regulation will be essential to mitigating the risks of increasingly powerful AI models and that his goal is to ensure the U.S. maintains its leadership role in AI development.
To thrive in the AI space, companies must balance legal compliance with innovation. They should review applicable AI regulations carefully to ensure compliance and to provide accurate information to consumers and employees. To promote transparency, companies should consider labeling AI-generated content, as the EU proposes, and keep stakeholders informed of changes in the regulatory landscape. Companies should also evaluate the values embedded in their AI systems to ensure that their development ultimately benefits society rather than harms it.
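Building on the labeling suggestion above, the following is a minimal, hypothetical sketch of how a publishing step might attach a visible disclosure to content flagged as AI-generated. The `ContentItem` type and the wording of the notice are assumptions for illustration, not language from the EU proposal.

```python
from dataclasses import dataclass

AI_DISCLOSURE = "Notice: this content was generated by an AI system, not a human author."


@dataclass
class ContentItem:
    body: str
    ai_generated: bool  # set wherever the content is produced (e.g., from provenance metadata)


def prepare_for_publication(item: ContentItem) -> str:
    """Attach a visible disclosure to AI-generated content before it reaches users."""
    if item.ai_generated:
        return f"{item.body}\n\n{AI_DISCLOSURE}"
    return item.body


# Example: a draft produced by a generative model is labeled before publication
draft = ContentItem(body="Quarterly results exceeded expectations...", ai_generated=True)
print(prepare_for_publication(draft))
```

A simple flag-and-label step like this also makes it easier to keep records of which published materials were machine-generated, which can help when regulators or stakeholders ask how disclosure obligations are being met.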