A group of top AI scholars is calling on the European Union to ensure that its proposed rules for the technology cover recently popular “general-purpose” AI tools such as OpenAI’s ChatGPT and Google’s Bard. The experts argue that these tools can have damaging effects and that the E.U.’s AI Act should explicitly regulate these forms of artificial intelligence.
The group comprises prominent artificial intelligence researchers who, in December, signed an open brief urging European leaders to adopt regulations targeting such tools, which they describe as “just the tip of the iceberg”. Former Google AI ethicist Timnit Gebru and Mozilla Foundation President Mark Surman were among the signatories. The brief asserts that European officials should “take an expansive approach in the proposed rules to ‘high-risk’ applications”, covering tools like ChatGPT, DALL-E 2, and Bard.
Tech companies are rapidly integrating AI into everyday products, making it crucial that high-risk AI tools be regulated throughout the product cycle. To that end, the brief states that companies developing these models must be held accountable for their data and design choices. It also recommends dropping any legal language in the proposal that would allow AI developers to disclaim responsibility, arguing that this would create a “dangerous loophole”.
Meanwhile, in the United States, federal policy is still at an early, exploratory stage on AI-specific regulation. According to Amba Kak, a former advisor to Federal Trade Commission chair Lina Khan who also penned a report calling for closer scrutiny of AI harms, the E.U. will likely be the first to enact an “omnibus framework”, setting a global precedent for AI regulation.
Furthermore, European Commissioner for Internal Market Thierry Breton recently remarked that the E.U.’s proposed AI regulations need to address new chatbots and similar AI tools, as these can “pose risks”. Breton added that “we need a solid regulatory framework to ensure trustworthy AI based on high-quality data”.
OpenAI is an American artificial intelligence company based in San Francisco, California, founded in 2015 by Elon Musk, Sam Altman, Ilya Sutskever, and others. The company is dedicated to developing artificial intelligence technologies that contribute positively to the world and place AI capabilities in the hands of both corporations and the general public. OpenAI developed the popular chatbot ChatGPT, which can be used in many different capacities.
Timnit Gebru, a former AI ethicist at Google of African descent, is a leader in the field of ethical artificial intelligence. She has worked with leading companies to promote ethical considerations in AI, researched the impacts of racial bias in digital technology, and advised policymakers on the responsible use of AI. Gebru believes that digital technology holds the potential to reduce systemic inequality, but only if companies and governments prioritize ethical considerations. She is a vocal advocate for AI regulation and a signatory of the brief imploring the European Union to take a tougher stance on regulating AI tools.