OpenAI may soon face its biggest regulatory challenge to date: Italian authorities have given the company until April 30th to comply with data protection and privacy laws – a task AI experts say is almost impossible.
In late March, Italy imposed a wide-reaching ban on OpenAI’s GPT products, becoming the first Western nation to do so. The move followed a data breach that had left ChatGPT and GPT API customers’ data exposed.
To comply, OpenAI must implement age-verification measures to ensure that users meet its terms of service and are all over the age of 13. The company must also be able to prove that it acquired user data lawfully.
The EU’s General Data Protection Regulation (GDPR) requires that consent be given before personal data can be collected and used – including for training AI models. OpenAI must also give Europeans the right to opt out of the data collection that feeds its models.
Because AI models are trained on massive troves of data, often scraped from the internet, it is close to impossible for engineers to pinpoint – let alone remove – individual pieces of personal data. This makes meeting European compliance requirements incredibly challenging for OpenAI.
MIT Technology Review spoke with AI ethics expert Margaret Mitchell on the matter, who believes that “OpenAI is going to find it near-impossible to identify individuals’ data and remove it from its models.” Lilian Edwards, an internet law professor at Newcastle University, added that the violations are so flagrant that the case may end up before the Court of Justice of the European Union.
If OpenAI cannot meet the April 30th deadline, its products could be barred from operating in Italy. The situation is regarded as a pivotal moment for the entire tech industry, as it will help clarify how AI should be regulated.
OpenAI is an artificial intelligence research laboratory based in San Francisco. It focuses on developing AI technologies and ensuring they are used safely.
Margaret Mitchell, quoted above via MIT Technology Review, has had a long career in data privacy and ethical AI, including founding and co-leading Google’s Ethical AI research team. She currently serves as Chief Ethics Scientist at Hugging Face, where she works to ensure that ethical standards are upheld in AI development.