OpenAI, a leading artificial intelligence (AI) startup, is facing two copyright-infringement lawsuits filed by award-winning novelists and a comedian. Novelists Paul Tremblay and Mona Awad, along with comedian Sarah Silverman, accuse OpenAI of training the language models behind ChatGPT on their books without their consent, in violation of copyright law.
The lawsuits, filed in the US District Court for the Northern District of California in San Francisco, claim that ChatGPT generates accurate summaries of the plaintiffs’ books, which the plaintiffs offer as evidence that the software was trained on their work without permission. The court documents state that OpenAI made copies of Tremblay’s book The Cabin at the End of the World, as well as Awad’s books 13 Ways of Looking at a Fat Girl and Bunny.
Silverman, along with novelists Christopher Golden and Richard Kadrey, filed a similar lawsuit, noting that their books contain copyright management information that ChatGPT does not reproduce. They further allege that OpenAI violated the Digital Millennium Copyright Act (DMCA) by removing this information.
OpenAI trains its language models on text scraped from the internet, including hundreds of thousands of copyrighted books hosted on platforms such as Sci-Hub and Bibliotik. The authors argue that their books were used to train ChatGPT without authorization, enabling OpenAI to profit from their work without attribution or compensation.
The authors have filed the suits as proposed class actions and are seeking compensatory damages as well as permanent injunctions to stop OpenAI from continuing these practices.
In a different development, software giant Adobe has implemented new restrictions on its employees’ use of external generative AI tools. Employees are now prohibited from using personal email addresses or corporate credit cards to access and pay for machine learning products and services. These measures aim to safeguard Adobe’s data and prevent any misuse of generative AI tools that may harm the company, its customers, or its workforce.
This move aligns Adobe with other companies, such as Amazon, Samsung, and Apple, which have taken similar steps over data privacy and security concerns. While Adobe has not banned tools like ChatGPT outright, strict guidelines regulate their use. Employees must not disclose their input prompts, upload sensitive Adobe data or code, summarize documents, or patch software bugs using these tools, and they are encouraged to opt out of having their conversations used as training data.
Meanwhile, the Pentagon is testing large language models to assess their ability to handle text-based tasks that could aid decision-making and combat planning. The models are given classified documents and asked to help plan responses to hypothetical scenarios, such as a global crisis. Large language models can return results within minutes, whereas tasks such as requesting information from a specific military unit can take human staff hours or even days.
Although the technology shows promise, it remains difficult to work with: performance can vary with the wording of a request, and the models sometimes produce inaccurate information. Even so, a recent successful test with secret-level data suggests that the military could deploy large language models in the near future.
The Pentagon has not disclosed which models are being tested, but the defense-oriented Donovan system developed by Scale AI is believed to be among them. Other candidates include OpenAI’s models, available through Microsoft’s Azure Government platform, and tools from defense contractors such as Palantir and Anduril.
In conclusion, OpenAI faces legal action from prominent authors and a comedian for allegedly training its language models on their copyrighted books without permission. Adobe has restricted its employees’ use of external generative AI tools, emphasizing data protection and business security. And the Pentagon is testing large language models to evaluate their ability to handle text-based tasks relevant to decision-making and military operations. Together, these developments highlight the ongoing challenges and legal questions surrounding AI technology.