OpenAI, the developer behind the popular ChatGPT language model, is facing a class-action lawsuit for allegedly using scraped internet data to train its artificial intelligence (AI) technology. The lawsuit, filed by California-based law firm Clarkson, claims that OpenAI violated privacy rights by utilizing public content like social media comments, blog posts, and Wikipedia articles without permission. The firm aims to represent individuals whose information was allegedly stolen and misused to develop ChatGPT. The lawsuit seeks to establish restrictions on data usage and ensure compensation for internet users who contribute to the creation of AI models.
Some argue that the use of public internet data should be considered fair use. Intellectual-property lawyer Katherine Gardner states that when individuals post content on social media or any site, they generally grant the platform a broad license to use their content. Therefore, it may be challenging for end users to claim entitlement to payment or compensation for their data’s use in training AI models. Nevertheless, this lawsuit is just one in a series of legal challenges faced by OpenAI. The company was previously sued for its use of computer code on GitHub and faced allegations that ChatGPT produced defamatory text.
In a separate study, Comparitech found that 25% of the top 400 children’s apps available on Apple’s App Store potentially violate the Children’s Online Privacy Protection Act (COPPA). This finding aligns with a previous study of Google Play, which likewise identified a quarter of kids’ apps breaking COPPA rules. The most common violation was the lack of clear and comprehensive information on obtaining parental consent; broken links and missing child privacy policies were also frequent issues. Additionally, almost half of the apps collected data without parental consent, with persistent identifiers being the most commonly collected type of data.
Apple, as the distributor of these apps, could be held liable under COPPA. However, legal gray areas have allowed these violations to go unnoticed. The study’s findings have been brought to Apple’s attention, although the company has not yet responded.
Meanwhile, the University of Manchester (UoM) recently disclosed a cyberattack that resulted in the theft of approximately 7TB of data. According to leaked information, the stolen data includes the details of around 1.1 million patients from the National Health Service (NHS). This dataset, used primarily for research purposes, contains information about patients treated for major trauma after terror attacks. Patients may be unaware of their inclusion in the database, as consent was not required for data collection. While UoM has secured the dataset, it has warned NHS officials that the leaked data could become publicly available. Investigations into the incident are ongoing.