Regulatory pressure on artificial intelligence (AI) is mounting, and new lawsuits are now targeting OpenAI, the prominent AI research lab. The suits claim that OpenAI violated state and federal copyright and privacy laws when collecting the data used to train its language models, including those behind ChatGPT, alleging that the company scraped personal data from sources such as Snapchat, Spotify, Slack, and even the health platform MyChart.
Beyond privacy, the lawsuits also argue that OpenAI infringed copyright, an area of law that remains a grey zone where AI is concerned. They warn that AI is advancing rapidly and could cause serious harm if these issues are not addressed promptly. One of the firms involved, Clarkson Law Firm, is actively seeking additional plaintiffs for the class action and has created a website where individuals can share their experiences with AI products from OpenAI and other companies.
OpenAI has not responded to requests for comment, but its privacy policy states that the company does not sell or share personal information for cross-contextual advertising and does not knowingly collect personal information from children under 13. The lawsuits contend otherwise, arguing that OpenAI violated privacy laws by collecting and sharing data for advertising purposes, and in particular by exposing minors and vulnerable individuals to predatory advertising and algorithmic discrimination.
These legal challenges arrive as the AI industry faces growing scrutiny and calls for regulation. The US Federal Trade Commission recently expressed concerns about generative AI and its impact on competition, and US lawmakers are exploring AI regulation of their own. In Europe, the proposed EU AI Act has prompted executives from more than 150 companies to warn that the rules could prove ineffective while harming competitiveness.
While the legal landscape for AI remains uncertain, more marketers are recognizing AI's potential to reshape many aspects of business. Still, companies are being urged to experiment responsibly to avoid copyright infringement and plagiarism. Some AI startups, such as Israel-based Bria, train their tools only on content they have properly licensed, despite the higher cost of that approach.
As the legal system catches up to the AI industry, market demands are expected to push companies toward more responsible behavior, forcing them to prioritize ethical and legal considerations. While models themselves may no longer be the differentiator, the quality and proper use of data remain crucial. The outcome of these lawsuits, and the regulatory developments that follow, will shape the future of AI and its impact on society.