On March 31, 2023, the Italian Data Protection Authority (“DPA”) issued a temporary ban on ChatGPT and launched an investigation into OpenAI’s compliance with the General Data Protection Regulation (“GDPR”) after a data breach was discovered in the platform. The breach occurred on March 20 and, over a nine-hour period, exposed some users’ chat histories and payment-related information to other users. The DPA’s investigation identified several compliance issues with the way ChatGPT was operating: OpenAI had not established an adequate legal basis for collecting and processing large amounts of personal data, the service could process and generate inaccurate personal data, and it lacked an age verification mechanism to keep out users under the age of 13.
OpenAI’s European representative was given 20 days to report to the Italian supervisory authority on the corrective measures it had implemented, or face potential fines of up to €20 million or 4% of OpenAI’s total worldwide annual turnover. OpenAI subsequently entered into discussions with the DPA aimed at lifting the ban.
This incident underscores the need for AI regulation in the European Union and the importance for developers of complying with the GDPR when using AI in their operations. The UK’s Information Commissioner’s Office released a statement emphasizing the privacy implications of AI and offering guidance on establishing a legal basis for data processing activities within the scope of the GDPR.
OpenAI is an artificial intelligence research company, founded in 2015 as a nonprofit with early backing from investors including Elon Musk; it now also operates a capped-profit subsidiary. ChatGPT is an AI-powered chatbot built by OpenAI that can converse in and understand natural language. It is powered by OpenAI’s GPT family of large language models, advanced language processing systems that can generate human-like text. ChatGPT was released to the public as a free research preview rather than as open-source software.