Comedian and novelists file lawsuits against OpenAI for book scraping

OpenAI, a leading artificial intelligence (AI) startup, is facing two copyright-infringement lawsuits filed by award-winning novelists and a comedian. Novelists Paul Tremblay and Mona Awad, along with comedian Sarah Silverman, accuse OpenAI of training the language models behind its chatbot, ChatGPT, on their books without consent, in violation of copyright law.

The lawsuits, filed in the U.S. District Court for the Northern District of California in San Francisco, claim that ChatGPT generates accurate summaries of the plaintiffs’ books, which the plaintiffs offer as evidence that the software was trained on their work without permission. The court documents state that OpenAI made copies of Tremblay’s book The Cabin at the End of the World, as well as Awad’s books 13 Ways of Looking at a Fat Girl and Bunny.

Silverman, along with novelists Christopher Golden and Richard Kadrey, filed a similar lawsuit, emphasizing that their books contain copyright management information that ChatGPT did not reproduce in its output. They further allege that OpenAI violated the Digital Millennium Copyright Act (DMCA) by removing this information.

OpenAI trains its language models on text scraped from the internet, which the complaints say includes hundreds of thousands of copyrighted books hosted on shadow-library platforms such as Sci-Hub and Bibliotik. The authors argue that their books were used to train ChatGPT without authorization, enabling OpenAI to profit from their work without proper attribution.

Seeking redress, the authors have filed class-action suits asking for compensatory damages, as well as permanent injunctions to prevent OpenAI from continuing these practices.

In a separate development, software giant Adobe has placed new restrictions on its employees’ use of external generative AI tools. Employees are now prohibited from using personal email addresses or corporate credit cards to access and pay for machine learning products and services. These measures aim to safeguard Adobe’s data and prevent misuse of generative AI tools that could harm the company, its customers, or its workforce.

This move aligns Adobe with other companies, such as Amazon, Samsung, and Apple, which have taken similar steps over data privacy and security concerns. While Adobe hasn’t outright banned tools like ChatGPT, strict guidelines regulate their use: employees must refrain from disclosing their input prompts, uploading sensitive Adobe data or code, summarizing documents, or patching software bugs with these tools, and they are encouraged to opt out of having their conversations used as training data.

Meanwhile, the Pentagon is testing large language models to assess their ability to solve text-based tasks that could aid decision-making and combat planning. The models are given secret-level documents and asked to help plan and resolve hypothetical scenarios, such as a global crisis. Large language models can return results within minutes, whereas human staff may need hours or even days to complete certain tasks, such as requesting information from specific military units.

The technology remains temperamental, however: its performance can vary with the wording of a request, and it can sometimes produce inaccurate information. Even so, the recent successful test conducted with secret-level data suggests that large language models could be deployed by the military in the near future.

The Pentagon has not disclosed which models are being tested, but the defense-oriented Donovan system developed by Scale AI is believed to be among them. Other candidates include OpenAI’s models, available through Microsoft’s Azure Government platform, and tools created by defense contractors like Palantir or Anduril.

In conclusion, OpenAI faces legal action from prominent authors and a comedian for allegedly training its language models on their copyrighted books without permission. Adobe has restricted employees’ use of external generative AI tools, emphasizing data protection and business security. And the Pentagon is testing large language models to evaluate their capacity to solve text-based tasks relevant to decision-making and military operations. These developments highlight the ongoing challenges and legal implications surrounding AI technology.

Frequently Asked Questions (FAQs) Related to the Above News

What are the lawsuits against OpenAI regarding?

The lawsuits against OpenAI are related to copyright infringement. Novelists and a comedian claim that OpenAI trained its language model, ChatGPT, on their books without their consent, violating copyright laws.

Who filed the lawsuits against OpenAI?

The lawsuits were filed by novelists Paul Tremblay and Mona Awad, as well as comedian Sarah Silverman. Another lawsuit was also filed by novelists Christopher Golden and Richard Kadrey.

What evidence do the plaintiffs have?

The plaintiffs argue that ChatGPT generates accurate summaries of their books, which serves as evidence that the software was trained on their work without permission.

What specific works are mentioned in the lawsuits?

The plaintiffs claim that ChatGPT used Tremblay's book The Cabin at the End of the World and Awad's books 13 Ways of Looking at a Fat Girl and Bunny without authorization.

How have the authors alleged that OpenAI breached copyright laws?

The authors allege that OpenAI violated the Digital Millennium Copyright Act (DMCA) by removing copyright management information from their books when training ChatGPT.

What actions are the authors seeking through the lawsuits?

The authors have initiated a class-action lawsuit and are seeking compensatory damages. They are also seeking permanent injunctions to prevent OpenAI from continuing to use their works without permission.

How has Adobe restricted its employees' use of generative AI tools?

Adobe now prohibits its employees from using personal email addresses or corporate credit cards to access and pay for machine learning products and services. The company has implemented these measures to protect its data and prevent potential misuse of generative AI tools.

What are some guidelines in place for Adobe employees regarding the usage of generative AI tools?

Adobe employees are required to refrain from disclosing their input prompts, uploading sensitive Adobe data or code, summarizing documents, or patching software bugs using these tools. They are also encouraged to opt out of having their conversations used as training data.

What are the Pentagon's tests involving large language models focused on?

The Pentagon is conducting tests to assess the ability of large language models to solve text-based tasks relevant to decision-making and potential combat scenarios.

What advantages do large language models offer in comparison to human staff?

Large language models can provide data within minutes, whereas it may take human staff hours or days to complete certain tasks that involve requesting information from specific military units.

What challenges are associated with using large language models?

Large language models can sometimes produce inaccurate information, and their performance can vary with the wording of requests.

Which specific models are believed to be part of the Pentagon's tests?

While the specific models being tested haven't been disclosed, it is believed that the defense-oriented Donovan system developed by Scale AI is among them. Other potential systems could include OpenAI's models available through Microsoft's Azure Government platform or tools created by defense contractors like Palantir or Anduril.

Aryan Sharma
Aryan is our dedicated writer and manager for the OpenAI category. With a deep passion for artificial intelligence and its transformative potential, Aryan brings a wealth of knowledge and insights to his articles. With a knack for breaking down complex concepts into easily digestible content, he keeps our readers informed and engaged.
