OpenAI Faces Legal Trouble Over AI Hallucinations Following Noyb Complaint
The privacy advocacy organization Noyb has lodged a complaint against OpenAI with the Austrian Data Protection Authority (DPA), alleging that OpenAI’s product, ChatGPT, violates EU data protection law. The crux of the complaint is the organization’s claim that ChatGPT disseminates inaccurate information about individuals, contrary to the requirements of the EU’s General Data Protection Regulation (GDPR).
The GDPR stipulates that personal data must be accurate and grants individuals the rights to access, rectify, and erase information held about them. Noyb alleges that when a public figure asked OpenAI to correct a false date of birth produced by ChatGPT and to delete the related data, OpenAI denied the request, citing an inability to rectify inaccuracies stored in the ChatGPT model.
Noyb argues that while inaccuracies in AI models may be somewhat tolerable for academic purposes, they are unacceptable where personal data is concerned, given the legal obligations involved. The organization also raised concerns about OpenAI’s lack of transparency regarding the sources of the information ChatGPT generates.
OpenAI’s response raised eyebrows: the company said that ensuring factual accuracy in large language models remains an ongoing area of research. Noyb’s data protection lawyer underscored the potential consequences of AI-generated falsehoods about individuals, stressing that companies must comply with existing law.
European privacy regulators have been closely monitoring generative AI tools, and past incidents have led to temporary restrictions on such services over data protection concerns. The outcome of Noyb’s complaint against OpenAI remains uncertain, particularly with respect to whether the company can bring ChatGPT into GDPR compliance.
The legal wrangle between Noyb and OpenAI encapsulates the broader debate over the accountability and ethical implications of AI technologies in safeguarding individuals’ privacy rights. As the technology continues to evolve rapidly, the onus is on companies like OpenAI to navigate regulatory complexity and prioritize data accuracy and individual rights in their AI development.