ChatGPT, the Artificial Intelligence (AI) software developed by OpenAI, is facing increased scrutiny from data protection authorities worldwide. With regulators in the UK and EU already closely monitoring how AI technology complies with the General Data Protection Regulation (GDPR), their response to organizations such as OpenAI that build AI into their products bears close watching.
Regulators in the UK and EU are dedicating resources to understanding AI as a developing field. For instance, the UK Information Commissioner’s Office (ICO) has published a guide to AI and a risk toolkit to assist organizations in navigating this complex area of risk.
One recent development involving ChatGPT is its suspension in Italy. The Italian data protection authority, the Garante, issued an Order on March 30th, 2023, after investigating a reported data breach affecting ChatGPT users. The Garante determined that OpenAI’s processing activities violated various GDPR Articles. As a result, the Garante imposed an immediate temporary limitation on processing, applying to all personal data of Italian users. OpenAI was given 20 days to respond to the Garante, detailing the measures taken to address the breaches. Failure to respond could lead to an administrative fine of up to €20 million or 4% of global annual turnover, whichever is higher.
On April 28th, 2023, the Garante confirmed that OpenAI had notified them of the measures taken, allowing OpenAI to resume operations and the processing of Italian users’ data. However, the Garante stated that it would continue its fact-finding activities regarding OpenAI through the European Data Protection Board’s task force.
Following the Italian developments, other supervisory authorities also began closely monitoring ChatGPT and AI services in general. The Hessian data protection authority (HBDI) in Germany shared concerns similar to those of the Garante and requested further information from OpenAI. Additionally, the Baden-Württemberg data protection authority (LfDI) approached OpenAI for comment.
These actions highlight the need to fully assess OpenAI’s compliance with data protection laws, considering the purposes of processing and the data that feeds the AI algorithm’s knowledge. Regulators are particularly concerned that questions put to ChatGPT may reveal personal information about individuals, including their political, religious, ideological, or scientific interests, as well as their family or sexual life situations.
The Spanish data protection authority, the AEPD, initiated a preliminary investigation into OpenAI for possible GDPR violations on April 13th, 2023.
In Germany, the HBDI seeks to gather information similar to that sought by the Garante and the UK ICO. On June 1st, 2023, the HBDI issued a questionnaire to OpenAI, aiming to determine whether ChatGPT’s data processing complies with German and European data protection laws. The HBDI Commissioner indicated that if ChatGPT fails to adequately safeguard users’ fundamental rights and data protection, the authority has effective tools to respond. OpenAI is required to submit its answers no later than June 30th, 2023.
Based on information from the Hessian Commissioner, there may be a coordinated response to ChatGPT from the German supervisory authorities or the European Data Protection Board. The objective is to demand the same level of data protection from American AI providers as from European providers.
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) expressed concerns about how organizations using generative AI, such as ChatGPT, handle personal data. The AP emphasized that ChatGPT, being trained on data from the internet and on user questions, may contain sensitive and highly personal information. Furthermore, the content generated by ChatGPT may be outdated, inaccurate, inappropriate, or offensive. The AP aims to clarify how personal data is handled during the training of AI systems.
As ChatGPT garners significant attention from the media and a wide audience, it remains at the forefront of data protection regulators’ agendas. AI regulation is still a developing landscape, and organizations must ensure compliance with data protection obligations and actively demonstrate this to regulators. As individuals become more aware of the data collected and used when interacting with AI technologies, both regulators and data subjects are likely to engage more actively in this field.
The interactions between regulators, particularly across the EU, will be closely monitored. Regulators are investing significant resources in investigating AI technologies, which suggests that further commentary, and potentially criticism or lessons for organizations, can be expected in the coming months.