Meta, formerly known as Facebook, is facing scrutiny from the European Union over its use of personal data to train artificial intelligence models without user consent. The privacy advocacy group NOYB has criticized Meta's plan to use years of personal posts, images, and online tracking data for its AI technology, prompting privacy authorities across Europe to intervene.
Recent changes to Meta's privacy policy, effective June 26, have raised concerns because they would allow the company to use personal information without explicit user consent. In response, 11 complaints have been filed across European countries, urging data protection authorities to take urgent action.
While Meta argues that it uses publicly available and licensed information for AI training, NOYB contends that the company's approach violates the EU's General Data Protection Regulation (GDPR). NOYB founder Max Schrems emphasized that the law requires obtaining opt-in consent from users, not offering a hidden opt-out option.
Despite Meta's assertion that its practices align with privacy laws and industry standards, critics point to a previous European Court of Justice ruling that rejected the company's claim of legitimate interest as a legal basis for advertising. The cumbersome opt-out process and the reliance on user data for AI development have further fueled the debate over privacy rights and data protection.
As the regulatory landscape continues to evolve, Meta’s data practices are under increasing scrutiny, with calls for greater transparency, accountability, and user consent in the use of personal information for AI technologies. The outcome of ongoing investigations and regulatory actions may have far-reaching implications for how tech companies handle data privacy in the digital age.