As technology firms rush to integrate artificial intelligence (AI) into more products, a number of prominent AI researchers are calling on the European Union (EU) to take a more expansive view of its proposed regulations by expressly targeting general-purpose tools such as ChatGPT and DALL-E 2.
The group argues that such an approach would set the global regulatory tone and is necessary to prevent the potential harms caused by AI. The EU's recent AI Act proposed transparency and safety requirements for specific high-risk uses of AI, but it sidestepped general-purpose AI.
The amended version of the draft approved by the European Council in December expands the scope of the regulations by classifying "general purpose" tools such as chatbots as high risk as well. However, the draft has yet to be formally adopted and faces political hurdles from those who oppose its expansion.
Timnit Gebru, a former Google AI ethicist, and Mark Surman, president of the Mozilla Foundation, are among dozens of AI scholars who signed the brief, which urges EU officials to treat general-purpose AI tools like ChatGPT as "high-risk". The brief also calls on the EU to take an "expansive" view of which products its regulations should cover, as "technologies such as ChatGPT, DALL-E2, and Bard are just the tip of the iceberg".
Amba Kak and Sarah Myers West, advisors to Federal Trade Commission Chair Lina Khan, argue that because Europe is further along than the United States in researching and adopting AI regulations, this technology-specific omnibus framework would set the global precedent.
Crucially, any AI regulation should ensure that common-use AI tools are regulated throughout the product life cycle, beginning at the original development stage, and should hold companies accountable for the data and design choices they make. Furthermore, the EU must avoid language that would allow AI developers to escape regulation through legal disclaimers.
OpenAI’s popular chatbot ChatGPT and Google’s Bard have generated significant attention. But important as they are, they cannot be the sole focus of the EU rules, because they are only the tip of the iceberg.
Alex Hanna, director of research at the Distributed AI Research Institute, stresses the importance of looking beyond the tools currently in the spotlight and covering any tool that is potentially harmful, as doing so would send “a strong signal that the EU does not want to focus on models which are already causing significant harm”.
In conclusion, EU officials must adopt rules “expansive” enough to protect the public from the potential harms of AI tools, and must be vigilant in enforcing them.
OpenAI is an artificial intelligence research lab based in San Francisco, California, founded as a non-profit, that has become renowned for developing deep learning technologies for natural language processing.
Timnit Gebru is a research scientist who specializes in artificial intelligence and its potential harms to society, particularly with regard to machine learning and large datasets. An advocate of responsible, ethical AI, she led Google’s Ethical AI team before her departure in December 2020. Her advocacy has made her one of the most recognizable figures in the tech industry and has helped raise awareness of the potential dangers associated with AI.