FTC Investigates OpenAI’s ChatGPT for Misinformation as 2024 Elections Approach
The Federal Trade Commission (FTC) has opened an inquiry into OpenAI’s ChatGPT, an advanced language model, amid concerns over the spread of misinformation ahead of the 2024 elections. In a 20-page letter to OpenAI, the FTC requested information about the company’s data security practices and its approach to addressing potential risks associated with artificial intelligence (AI).
The FTC’s probe, led by Chair Lina Khan, delves into various aspects of OpenAI’s operations. These include marketing efforts, AI model training practices, and the safeguarding of users’ personal information. The agency holds the authority to regulate unfair and deceptive business practices, making this investigation a significant step toward ensuring responsible AI use.
Notably, the FTC is also investigating whether false or misleading statements that ChatGPT has generated about individuals have caused reputational harm to consumers. To gauge the impact of OpenAI’s practices, the agency has asked the company for a detailed account of any complaints it has received about ChatGPT spreading misinformation or harmful statements.
Sam Altman, CEO of OpenAI, acknowledges society’s concerns regarding the influence of AI on our lives. While sharing the excitement of AI advancements, Altman recognizes the need to address the apprehension surrounding its potential impact. He emphasizes OpenAI’s commitment to collaborating with the FTC in resolving any issues.
The investigation follows warnings from experts about the dangers posed by AI-generated disinformation. Gary Marcus, a cognitive scientist and AI researcher, testified before Congress that AI could create persuasive lies at an unprecedented scale, endangering democracy itself. Marcus pointed to the damaging effects of social media, warning that AI’s influence could surpass them.
One specific concern is the personalized disinformation that AI can generate. Researchers from Google, MIT, and Harvard have found that large language models, like ChatGPT, can accurately predict public opinion based on media consumption patterns. This ability allows corporate, government, or foreign entities to manipulate voters’ actions by crafting tailored strategies.
During a congressional hearing, Senator Richard Blumenthal, the subcommittee’s chair, demonstrated how easily voters can be deceived by playing an audio clip of an AI-generated voice imitating his own. He highlighted the risk of deceptive information swaying public opinion and stressed the need for users to exercise their own critical thinking.
Altman expressed concern that users may become increasingly reliant on AI-generated answers without verifying their accuracy, potentially undermining truth and democracy. Even as AI models continue to improve, he said, users must exercise their own critical judgment.
OpenAI has stated its commitment to keeping its technology safe and pro-consumer, and says it complies with the law. Altman has pledged to cooperate fully with the FTC during the investigation.
As the FTC investigates OpenAI’s ChatGPT, it aims to ensure the responsible and ethical use of AI, particularly regarding misinformation. The outcome of this investigation will likely have significant implications for AI development and its regulation in the future.