Americans Express Concerns Over Political Misinformation Spread by AI
A recent survey highlights Americans’ apprehension about the spread of political misinformation through artificial intelligence (AI). According to the poll, only 6% of respondents believe AI will decrease the dissemination of false information, while a third expect it to have little effect, leaving most anticipating that it will amplify falsehoods. The experience of the past year, much of it playing out on social media, has deepened worries that misinformation will spread even further in the 2024 election.
Rosa Rangel, a 66-year-old Democrat from Fort Worth, Texas, cited the proliferation of lies on social media over the past year and said she believes AI will make matters worse in 2024, comparing the situation to a pot brewing over. While the survey found that just 30% of American adults have used AI chatbots or image generators, and fewer than half say they are familiar with AI tools, there is broad consensus that political candidates should not be using AI.
The poll asked respondents about several ways candidates could use AI, and the reaction was largely negative. Clear majorities said it would be a bad thing for presidential candidates to use AI to create false or misleading media for political ads (83%), to edit or touch up photos and videos for ads (66%), to tailor political ads to individual voters (62%), or to answer voters’ questions via chatbot (56%).
The pessimism about candidates using AI is bipartisan. Majorities of both parties disapproved of candidates creating fake images or videos (85% of Republicans and 90% of Democrats) and of using chatbots to answer voter questions (56% of Republicans and 63% of Democrats).
This sentiment comes after AI has already made an appearance in the Republican presidential primary campaign. In April, the Republican National Committee released an AI-generated advertisement meant to depict the country’s future if President Joe Biden were reelected, featuring realistic-looking but fabricated images of boarded-up storefronts, military patrols, and waves of panicked immigrants. Similarly, Ron DeSantis, Florida’s Republican governor, used AI-generated images in his campaign, manipulating visuals to make it appear as though former President Donald Trump were embracing Dr. Anthony Fauci, a prominent figure in the COVID-19 pandemic response. His campaign also used an AI voice-cloning tool to mimic Trump’s voice in a social media post.
Andie Near, a 42-year-old from Holland, Michigan, who typically votes Democratic, said politicians should campaign on their merits rather than on their ability to instill fear. Near, who has used AI tools to retouch images in her work, believes misleading uses of AI can deepen and worsen the impact of conventional attack ads.
Thomas Besgen, a 21-year-old Republican college student from Connecticut, likewise objects to campaigns using deepfake audio or imagery to make it seem as though a candidate said something they never said. He considers such tactics morally wrong and favors banning deepfake ads or, failing that, requiring them to be labeled as AI-generated.
Besgen, who uses AI tools such as ChatGPT for schoolwork and for fun, said he trusts the information these platforms provide. He plans to use AI tools to learn more about the presidential candidates, something only 5% of adults say they are likely to do.
Besgen’s trust notwithstanding, the survey found widespread skepticism about the information AI chatbots provide: only 5% of respondents said they are highly confident that it is factual, and a majority of adults (61%) said they are not very or not at all confident in its reliability. That squares with the cautionary advice of AI experts, who warn against relying on chatbots for accurate information, because these models excel at mimicking writing styles but are prone to fabricating information.
The survey also found that adults in both major parties are open to regulating AI. Respondents reacted more positively than negatively to potential bans or labeling requirements for AI-generated content, whether imposed by tech companies, the federal government, social media platforms, or news outlets. About two-thirds favor a government ban on AI-generated content containing false or misleading images in political ads, and a similar share favor technology companies labeling all AI-generated content on their platforms.
When it comes to preventing AI-generated false or misleading information in the 2024 presidential election, Americans see the responsibility as shared: 63% point to technology companies, 53% to news media organizations, 52% to social media companies, and 49% to the federal government.
As the conversation surrounding AI and its impact on politics continues, Americans remain cautious and advocate for regulatory measures to ensure the responsible use of AI tools. The survey highlights the importance of transparency, ethical standards, and accountability in the deployment and regulation of AI technology.