On December 27, 2023, The New York Times filed a lawsuit against OpenAI, accusing the company of willful copyright infringement involving its AI tool ChatGPT. According to The Times, ChatGPT was trained on unauthorized copies of its articles, and its output reproduced language copied directly from that content. Seeking more than monetary compensation, The Times asked a federal court to order the destruction of ChatGPT, which would mean deleting OpenAI’s trained large language models and the associated training data.
Many of ChatGPT’s 100 million weekly users expressed concern over this possibility, raising the questions of whether a court could actually order the destruction of the AI tool and whether it would do so. As a law professor, I find these questions intriguing. To begin with, it is important to note that under copyright law, courts do have the authority to issue destruction orders. They have done so, for example, in cases involving counterfeit vinyl records: courts can order the destruction of infringing goods and of the equipment used to produce them, because there is no lawful use for pirated records and no valid reason for counterfeiters to keep the means of making them.
While copyright law has never before been used to destroy an AI model, there are signs that courts and regulators are becoming more willing to order such remedies. The Federal Trade Commission, for example, has compelled companies to delete unlawfully collected data, along with the algorithms and AI models trained on that data, through a process called algorithmic disgorgement.
However, even though a legal avenue exists for ordering the destruction of ChatGPT, I believe that outcome is unlikely in this particular case. I see three more plausible outcomes. First, the two parties may reach a settlement, in which case the lawsuit would be dismissed and no destruction order issued. Second, the court could side with OpenAI, finding that ChatGPT is protected by copyright law’s fair use doctrine. If OpenAI can establish that ChatGPT is transformative and does not substitute for The New York Times’ content, it may prevail.
Alternatively, even if OpenAI were to lose the case, the law might still find a way to save ChatGPT. Destruction orders must satisfy two requirements: the destruction must not impede lawful activities, and it must be the only remedy that can prevent further copyright violations. OpenAI could potentially show that ChatGPT has legitimate, noninfringing uses, or that destroying it is unnecessary to prevent future infringement.
Considering all these possibilities, it appears highly improbable that any court would order the destruction of ChatGPT and its training data. Still, developers should be aware that courts do have the authority to order the destruction of unlawfully built AI, and there is a growing willingness to exercise that power.