Experts Say It’s ‘Next To Impossible’ for OpenAI to Comply with EU Laws by April 30th

OpenAI may soon face its biggest regulatory challenge to date: Italian authorities have given the company until April 30th to comply with data protection and privacy laws, a task AI experts say is almost impossible.

In late March, Italy imposed a wide-reaching ban on OpenAI's GPT products, becoming the first Western nation to do so. The move followed a data breach that exposed information belonging to ChatGPT and GPT API customers.

The Italian authority's demands include age-verification measures to ensure that users comply with the terms of service and are all over the age of 13. OpenAI must also be able to prove that it acquired user data lawfully.

Under the EU's General Data Protection Regulation (GDPR), consent must be given before personal data can be collected and processed. OpenAI must also provide Europeans with the right to opt out of having their data used in its models.

Because AI models are trained on massive troves of data, often scraped from the internet, it would be close to impossible for engineers to pinpoint individual pieces of personal data within them. That makes compliance in Europe an enormous challenge for OpenAI.

MIT Technology Review spoke with AI ethics expert Margaret Mitchell, who believes that "OpenAI is going to find it near-impossible to identify individuals' data and remove it from its models." Lilian Edwards, an internet law professor at Newcastle University, said the case could end up before the Court of Justice of the European Union given the "flagrant violations" involved.

If OpenAI cannot meet the April 30th deadline, its products may be barred from operating in Italy. The situation is seen as a pivotal moment for the entire tech industry, offering an early glimpse of how AI regulation will be handled.

OpenAI is an artificial intelligence research laboratory based in San Francisco, focused on developing AI technologies and ensuring they are used safely.

Margaret Mitchell, interviewed by MIT Technology Review for this story, has had a long career in data privacy and ethical AI, including a role as Principal Research Engineer at Google AI. She is also co-founder and director of the Human-Centered AI Institute, a non-profit organization working to ensure that ethical standards are upheld in AI development.
