The European Union’s proposed Artificial Intelligence Act (AI Act) attempts to regulate AI technology by classifying systems into risk levels, ensuring compliance with EU law, and enforcing transparency requirements. The AI Act is still being negotiated, and many within the AI industry are questioning how it will affect them. One such organization is OpenAI, developer of the world’s most widely used conversational AI tool, ChatGPT. Unsurprisingly, OpenAI CEO Sam Altman has expressed concern about how the AI Act will be enforced, even suggesting the company could cease operating in the EU if the final text does not meet its expectations.
OpenAI is an artificial intelligence research company, founded as a nonprofit and now operating through a capped-profit subsidiary, that aims to develop artificial general intelligence. OpenAI works to find concrete steps toward a safer and more equitable strategy for developing machine learning. In late 2022, it released its GPT-based chatbot, ChatGPT, which rapidly gained popularity and let the public hold natural-language conversations with a large language model.
The EU AI Act proposes to place transparency requirements on generative tools like ChatGPT. These rules would require providers of GPT models to disclose when content was produced by AI and to design their systems to prevent the generation of illegal content. Altman has not explicitly said that ChatGPT will not comply with the rules, but he has expressed frustration over the act’s current proposed language.
The AI Act is a pivotal development for the AI industry. Thousands of organizations, including OpenAI, will be affected if the act passes. It will be interesting to see how organizations within the industry respond to the AI Act, and whether it proves to be a net positive or negative for the field.