Title: California: Class Action Lawsuit Targets ChatGPT’s Alleged Breach of Privacy
A class action lawsuit has been filed in California against Microsoft and OpenAI, alleging privacy violations in their AI products. The suit, which is currently ongoing, seeks $3 billion in damages. The plaintiffs, who remain anonymous, claim that AI products built on ChatGPT software have collected and disclosed personal information without adequate transparency or notification, and that OpenAI did so without obtaining consent, in violation of privacy protection laws.
Microsoft is named as a defendant because it has integrated OpenAI’s technologies into many of its software products and services, including Azure OpenAI Service, Microsoft Teams, and Bing.
The class action contends that these companies have been utilizing AI products to collect, store, track, share, and disclose the personal information of millions of internet users. The scope of this allegedly illicit harvesting includes product information, account details, names, contact information, payment methods, and more. Furthermore, the plaintiffs claim that the companies have been using the acquired data for training AI technology, posing an additional risk of personal information exposure.
This legal action also highlights other companies employing AI products, including Reddit, and raises concerns about the potential privacy risks associated with the use of various AI tools.
The lawsuit in California follows a series of lawsuits and regulatory enforcement actions against ChatGPT in the European Union and Israel. In April, the European Data Protection Board established a task force to examine potential legal actions against ChatGPT. The Italian and Spanish data protection authorities subsequently launched investigations into OpenAI, citing concerns over privacy breaches and alleged violations of the European General Data Protection Regulation arising from the use of ChatGPT. In response to these concerns, Italy temporarily blocked ChatGPT.
In Israel, a motion was filed with the Lod District Court to authorize a class action lawsuit, asserting that ChatGPT violated Israeli privacy protection laws by misusing user information. The motion specifically highlighted issues related to transparency, duration of information retention, information processing purposes, and disclosure to third parties.
These class actions, alongside enforcement proceedings in multiple countries, underscore the need for companies to manage privacy protection and data security rigorously when developing and deploying AI technologies. Doing so can reduce regulatory scrutiny and mitigate the risk of class action lawsuits.
In light of the European Parliament’s draft AI Act, as well as the class actions and regulatory actions taken against OpenAI in various jurisdictions, it is crucial for organizations to prioritize privacy protection and data security. These developments serve as a strong reminder of the need to tread carefully in the rapidly evolving landscape of AI technology, safeguarding user privacy and adhering to stringent data protection regulations.