A group of prominent AI researchers is pushing the European Union to change its proposed regulations on artificial intelligence, arguing that tools like OpenAI’s ChatGPT should be regulated directly. The proposed AI Act initially set out new rules for specific “high-risk” uses of the technology, such as in education or law enforcement, but did not cover “general purpose” AI like the chatbot. Now, as technology companies actively incorporate AI into everyday products, the AI scholars want the E.U. to add tools like ChatGPT to the list of “high-risk” AI.
The brief, signed by former Google AI ethicist Timnit Gebru, Mozilla Foundation President Mark Surman and the AI Now Institute’s Amba Kak and Sarah Myers West, among dozens of others, calls for the E.U. to take a broader approach that covers all AI tools, warning that concentrating on specific ones would force regulators to keep writing new rules as the technology changes. The signatories also propose regulating the development stages of these models, so that the original developers are held accountable for what they create.
In the U.S., lawmakers are just beginning to explore the regulation of AI, while in Europe the proposal is further along, with some leaders seeking to expand the restrictions on such tools. Though discussions among U.S. federal officials have stalled for the moment, states like Arkansas have started taking action of their own. Governor Sarah Huckabee Sanders (R) recently signed the Social Media Safety Act, which requires social media platforms to verify their users’ ages and to obtain a guardian’s permission if a user is under 18.
Still, European Commissioner Thierry Breton has assured that officials are aware of the risks of AI solutions such as ChatGPT, noting that such tools could spread misinformation on an alarming scale. Change is clearly in the works, although the proposal has met skepticism from right-leaning political groups in the E.U. Parliament.
OpenAI is a research laboratory, founded as a non-profit, that was co-founded by Elon Musk, Sam Altman, Greg Brockman and Ilya Sutskever. It specializes in developing advanced artificial intelligence and machine learning technologies, including the chatbot ChatGPT and the image generator DALL-E 2. Musk, who also owns Tesla, recently stirred controversy over the BBC account’s “government funded” label, which he appeared to have applied personally, and over his stated intention of defunding NPR after the news outlet announced it would leave Twitter, claiming that Musk had inaccurately labeled its account.
Timnit Gebru is a renowned Ethiopian-American AI researcher, a former Google principal scientist and the founder of Black in AI; she has done pioneering work on algorithmic fairness and bias. Mark Surman is the President of the Mozilla Foundation, a not-for-profit organization dedicated to helping Europe become a global leader in the digital world economy, open source innovation, and digital inclusion. Amba Kak and Sarah Myers West formerly served as advisers to Federal Trade Commission Chair Lina Khan. They wrote a report, released Tuesday, calling for greater transparency and oversight of consolidation and harms in the AI industry.
All in all, the E.U. is a step closer to establishing new regulations, but with skepticism lingering, it remains to be seen how it will ultimately play out. Meanwhile, in the U.S., individual states are doing more to protect children’s privacy online, though broader protections are still far from being enforced under federal law. AI is permeating almost every aspect of modern life, and as its implications continue to expand, so does the need to regulate and clean up the industry.