Italy’s data protection agency has reversed its initial stance and lifted its ban on the AI chatbot ChatGPT. The change in position came after OpenAI, ChatGPT’s creator, provided assurances that the chatbot would comply with regulations protecting users’ personal data and safeguarding minors.
Two weeks earlier, Italy had become the first Western nation to ban ChatGPT, after a security incident exposed users’ financial details and conversations through the chatbot. Regulators then set out to evaluate whether ChatGPT was compatible with Europe’s General Data Protection Regulation (GDPR) and Italy’s own privacy rules.
Concerns over protecting minors were heightened by fears that the large language model behind ChatGPT could generate false, biased, or even toxic content, putting young users at risk. Regulators in France and Canada have also received complaints, and Spain has asked the European Union’s GDPR watchdog to assess ChatGPT’s privacy protocols.
In the wake of this situation, Dr Ilia Kolochenko, Founder of ImmuniWeb and a member of the Europol Data Protection Experts Network, shared his views on the rules and regulations currently in place for artificial intelligence (AI). In his view, privacy protection is only a small part of the challenges that AI systems such as ChatGPT may face.
He believes regulators in numerous countries have started working on laws and regulations to ensure non-discrimination, explainability, transparency, and fairness in AI technologies. Dr Kolochenko points to the Federal Trade Commission in the United States and the Cyberspace Administration of China as being particularly active in this space.
One of the major points of contention with AI is the vast amount of training data that vendors collect and use without the content creators’ permission. Such large-scale data scraping often violates the terms of service of digital resources and, in some jurisdictions, may even lead to civil and criminal penalties under unfair competition laws.
According to Dr Kolochenko, banning AI is not the right course of action: malicious actors will simply continue to use the technology, gaining a competitive advantage over those who comply. The legal challenges facing ChatGPT are also mounting elsewhere. Brian Hood, a mayor in Australia, has accused the chatbot of making false claims about him and is now suing OpenAI for defamation.
OpenAI is a research laboratory with a mission to ensure that artificial general intelligence (AGI) benefits all of humanity. Founded in 2015 by tech visionaries Elon Musk, Sam Altman, Greg Brockman, and Ilya Sutskever, the organization focuses on AI research and development. It is committed to open source, building on existing technologies to both reduce barriers to innovation and accelerate the development of beneficial AGI.