ChatGPT, an artificial intelligence (AI) chatbot developed by OpenAI, is facing fresh demands from Italy's data protection regulator to give users in the country access to "right to be forgotten" tools. The demands follow a ban on the chatbot in the country earlier this month.
Under the new rules, Italian users of ChatGPT could demand that false information generated about them in response to user prompts be corrected. The "right to be forgotten", a principle that preceded the EU General Data Protection Regulation (GDPR), enables individuals to request the deletion of their personal data from the web.
The Italian data protection regulator (SA) stated that OpenAI should provide users with "easily accessible tools" for exercising the right to object to their data being processed and used by the chatbot's algorithms.
OpenAI has been given a deadline of April 30th to meet the SA's wide-ranging requirements, which include greater transparency about data processing, respect for the rights of data subjects, a legal basis for processing data for algorithmic training, and safeguards for minors. These requirements echo the data privacy rights established under the GDPR, including the right to withdraw consent, the "right to rectification" and the "right to be forgotten".
OpenAI's decision to withdraw ChatGPT from the country sparked criticism from experts. The Italian authorities, however, have been in discussions with the San Francisco-based company about reinstating the chatbot in the country once the necessary safeguards are in place.
OpenAI is a San Francisco-based artificial intelligence laboratory co-founded by Elon Musk and Sam Altman, among others. ChatGPT, launched in November 2022, is its best-known publicly released AI product. It was developed by OpenAI's research team and relies on large datasets drawn from the web to train its underlying models.
Recently, ChatGPT has been in the spotlight for generating false information, as was highlighted when an Australian mayor threatened legal action after the chatbot wrongly claimed he had been imprisoned for bribery. The episode raised serious questions about how language models such as ChatGPT may expose individuals to privacy risks.
In response, US authorities have opened public consultations to explore potential accountability measures for ensuring the responsible use of AI systems such as ChatGPT, which could pave the way for future legislation to regulate AI applications.
The Italian regulator's demands of ChatGPT reflect its emphasis on data privacy rights and its aim to protect citizens from such risks. OpenAI has roughly a month to meet these standards, and how it implements them will determine whether the chatbot can continue to operate in Italy in the long run.