Europe is sounding the alarm over ChatGPT, a generative artificial intelligence (AI) platform that allows users to submit queries and receive responses in the form of essays, poems, spreadsheets and computer code. Since December it has garnered over 1.6 billion visits, leading Europol, the European Union Agency for Law Enforcement Cooperation, to warn of the possible malicious use of the platform, from facilitating cybercrime to assisting in terrorist activities.
In light of these concerns, authorities across the continent have acted, with Italy placing a temporary ban on the program, citing privacy violations. The Italian data protection authority, the Garante, threatened OpenAI, the platform’s creator, with harsh penalties if concerns such as age verification and user data protection were not resolved. Spain, France and Germany have also investigated potential breaches of user privacy, while the European Data Protection Board has created a task force to ensure regulations are applied consistently throughout the European Union.
These events have prompted EU politician Dragos Tudorache to call them a “wake up call in Europe.” In response, recently proposed legislation aiming to establish an AI authority is being rushed through the European Parliament.
OpenAI, Inc. is a San Francisco-based artificial intelligence research laboratory that was established in 2015, with Microsoft as its most prominent backer. OpenAI builds solutions that help organizations solve complex problems using artificial intelligence, and its flagship product, ChatGPT, enables users to ask questions and receive responses in real time.
Dragos Tudorache is an EU legislator who co-sponsored the recently proposed Artificial Intelligence Act. The Act aims to provide a framework for developing effective regulatory solutions that ensure the use of AI within the EU respects ethical and legal standards. He has urged Europe to sound the alarm on ChatGPT and rein in its use so that the potential for malicious applications is limited.