AI’s Threat to Journalism: Big Tech, the Decline of News & the Dangers of Misinformation
Artificial intelligence (AI) poses a significant threat to journalism, experts warned Congress at a recent hearing. Media executives and academic experts testified about how AI, driven by big tech companies, is accelerating the decline of journalism, and they raised concerns about the dangers of AI-powered misinformation.
Senator Richard Blumenthal, the chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, emphasized that the rise of big tech has directly led to the decline of local news. He pointed out that companies like Meta, Google, and OpenAI are using content created by newspapers and authors to train their AI models without credit or compensation. Those companies then deploy the models to compete with traditional journalistic institutions, costing news organizations readership and revenue.
The conflict between tech companies and the news industry has been ongoing since the rise of digital platforms. As a result, many news organizations have gone out of business while tech platforms continue to profit. Research conducted by the Medill School of Journalism at Northwestern University shows that since 2005, the United States has lost nearly a third of its newspapers and almost two-thirds of its newspaper journalists.
Countries worldwide are taking action to force big tech companies to support local journalism. Canada and Australia have passed laws requiring tech companies to pay news outlets when their content is featured on the companies' platforms. In the United States, Senators Amy Klobuchar and John Kennedy have proposed similar legislation.
During the hearing, Danielle Coffey, the president and CEO of the News Media Alliance, highlighted the imbalance in the marketplace caused by the dominance of tech platforms. She argued that generative AI, which creates text, images, or other media, has been built using stolen goods. She called for congressional intervention to ensure that AI developers pay publishers for their content.
However, Curtis LeGeyt, the president and CEO of the National Association of Broadcasters, urged caution, stating that current copyright protections should apply. He also warned about the dangers of AI-generated misinformation and the burden it places on newsrooms to verify and authenticate content.
The controversy surrounding AI’s impact on journalism has led to several copyright lawsuits. The New York Times, comedian Sarah Silverman, and authors Christopher Golden and Richard Kadrey have all sued AI developers for using their work without permission. The issue extends beyond text-based content, as artists Kelly McKernan, Sarah Andersen, and Karla Ortiz have also sued companies that develop AI models capable of generating images.
The debate over whether legislation is needed to regulate AI and protect journalism continues. While some argue for immediate action, others believe that existing copyright laws are sufficient. The concern over AI-generated misinformation adds another layer to the discussion, highlighting the need for responsible use of AI technology in the news industry.
As the battle between journalists and news organizations on one side and big tech companies on the other persists, the future of journalism and the role of AI in shaping it remain uncertain. It is crucial for policymakers, industry stakeholders, and tech developers to find a balance that ensures the survival of quality journalism while leveraging the benefits of AI technology.
Disclaimer: This article is based on AI-generated content.