Media experts are debating how journalism can ethically adopt artificial intelligence (AI), particularly given the technology's potential to spread mis- and disinformation. While newsroom leaders differ on the details, there is broad agreement that the industry needs uniform standards governing its use of AI.
A key question is how AI, which remains prone to errors and hallucinations, can be relied upon ethically in an industry where trust and credibility are paramount. Jared Schroeder, an associate professor specializing in media law and technology, believes news organizations need time to develop best practices because the technology is still rapidly evolving.
Generative AI offers significant advantages: it can produce transcripts, edit copy, narrate audio, create images, and help investigative news outlets analyze data. But it also carries risks, including copyright violations, plagiarism, and factual errors.
Recent incidents illustrate these challenges. The New York Times filed a lawsuit against OpenAI and Microsoft for copyright infringement, and an investigative report found AI chatbots plagiarizing thousands of news articles. Such episodes underscore the need for news organizations to establish standards and practices for AI usage.
While many experts agree that AI can be a valuable tool for journalism, they also emphasize the importance of human oversight. Journalists such as Ryan Heath of Axios acknowledge AI's usefulness for research and inspiration but stress that it cannot replace the actual reporting and drafting of articles.
Attempts by news outlets to replace reporters with AI have had mixed results. Sports Illustrated faced accusations of publishing AI-generated content under fake bylines, although the outlet denied the allegations and attributed the content to a third party. Similarly, CNET's experiment with AI-written stories produced numerous articles containing errors, eventually prompting the site to stop publishing pieces generated entirely by AI.
Given these benefits and risks, media organizations are taking diverse approaches. Axios has moved cautiously, hiring journalists who specialize in AI to cover the topic. The New York Times has appointed an editorial director to establish principles for AI usage. Others, such as The Associated Press, have signed licensing agreements with AI developers as a way to manage the risks while still capturing the technology's advantages.
How newsrooms handle AI is a defining question for 2024, a year with significant elections in more than 40 countries. Newsroom leaders, media watchdogs, and Nobel laureate Maria Ressa have called for ethical AI usage through the Paris Charter on AI and Journalism. So far, however, few news organizations have adopted the charter, a sign of how divided the industry remains on AI implementation.
The potential impact of AI on journalism has also caught the attention of governments. A U.S. Senate subcommittee raised concerns about AI's role in newsrooms' declining revenue and the spread of disinformation. Likewise, the European Union passed the Artificial Intelligence Act to ensure safe and transparent AI usage, including requirements that tech companies disclose AI-generated content.
As discussions about AI in journalism continue, experts emphasize the need for transparency and human oversight. AI can make news organizations more efficient and their coverage more relevant, but ethical considerations and the preservation of reader trust remain critical. Adopting the technology carefully and thoughtfully is an ongoing challenge for the journalism industry.