The upcoming 2024 US presidential election is expected to see former president Donald Trump mount a challenge to incumbent Joe Biden. But what if the entire process were influenced by AI-powered misinformation campaigns? This worrying possibility surfaced in recent congressional testimony from Sam Altman, CEO of ChatGPT creator OpenAI, and AI expert Gary Marcus of New York University.
The potential risks are clear. As Senator Josh Hawley observed, ChatGPT and other large language models can predict public opinion by ingesting media feeds. Large-scale AI-driven misinformation could enable destabilization attempts, insider manipulation, and even shifts in people's political opinions without their awareness. Search results, too, can sway undecided voters when queries are made close to polling day.
Altman advocates AI regulation, including clear labeling of AI-generated content. Given the role social media played in recent elections, the need for swift action is paramount. AI, unlike social media, has an even greater capacity for damage; any delay in regulation could prove disastrous for democracy itself.
OpenAI is a leading artificial intelligence research laboratory co-founded by Sam Altman, entrepreneur Elon Musk, and others. Since its launch in December 2015, the company has conducted groundbreaking research in AI and robotics; its innovations include large language models such as the one powering ChatGPT.
Sam Altman has served as Chief Executive of OpenAI since 2019. Before that, he was president of the popular startup accelerator Y Combinator, which has funded over 2,000 companies since its inception. In May 2023, Altman appeared before Congress to discuss the potential risks of AI, including misinformation campaigns powered by tools like ChatGPT.