The New York Times has filed a lawsuit against OpenAI, alleging that the company's models reproduce ‘duplicative language’ from Times articles. The irony lies in the fact that while the renowned newspaper defended Harvard’s Claudine Gay for exactly that, it takes a different stance when its own content is at issue.
Claudine Gay, who serves as president of Harvard and a member of the Harvard Corporation, recently dismissed concerns about anti-Semitism on campus, suggesting there was little protection for Jewish people against hate speech and calls for violence. This sparked scrutiny of Gay’s position, given that she oversees a staggering $60 billion endowment and holds a vote in her own hiring or firing.
The broader community began examining the scholarly work of this relatively unknown activist-turned-administrator. What they discovered was an alarming amount of plagiarism. Yet the New York Times promptly rushed to defend Claudine Gay, reframing the issue as mere ‘duplicative language.’ This contrasted with the paper’s stance when Joe Biden was accused of stealing British politician Neil Kinnock’s life story for a speech: there the Times labeled it plagiarism, even though a speech, unlike Gay’s work, is not printed scholarship.
This double standard becomes even more perplexing now that the New York Times is suing OpenAI and Microsoft for using articles, including its own, to train their large language models. The Times, an $8 billion corporation, has itself employed AI extensively to generate stories that it subsequently copyrights. One could argue that its legal action against OpenAI and Microsoft for what amounts to plagiarism is contradictory and hypocritical.
The newspaper’s motive appears to be a quick payout, as its choice of legal representation suggests. Susman Godfrey, the law firm the Times hired, secured a settlement of nearly $800 million from Fox News in the defamation case brought by Dominion Voting Systems. The inclusion of Microsoft in the suit is also questionable, since the tech giant is merely an investor in OpenAI.
Despite the apparent hypocrisy, the New York Times, armed with a century-old copyright law, is likely to emerge victorious. Jurors, who often bring outdated perspectives to questions of technology and law, may well see the kind of duplicative language found in Claudine Gay’s work as plagiarism. Here, though, the spotlight falls on OpenAI, not Claudine Gay, because OpenAI is the defendant in this legal battle.
The expected outcome is a settlement. OpenAI cannot afford the risk of the New York Times pleading in the alternative: the Times insists that duplicative language in its own articles is not plagiarism, while arguing that the same language in OpenAI’s output is. In the eyes of a jury, common sense, rather than an objection of hypocrisy, will likely prevail.
As the story unfolds, it raises questions about the ethical implications of plagiarism and the use of AI in journalism. The clash between these two prominent entities highlights the need for clarity and consistency when it comes to issues of language duplication and copyright infringement.
In conclusion, the New York Times finds itself embroiled in the lawsuit it filed against OpenAI over allegations of duplicative language. The irony is that the Times previously defended Claudine Gay against similar accusations. The outcome of this legal battle remains uncertain, but it serves as a reminder of the ongoing challenges surrounding plagiarism, copyright, and the role of AI in journalism.