Americans Express Concerns Over Political Misinformation Spread by AI

A recent survey has highlighted Americans’ apprehension regarding the spread of political misinformation through the use of artificial intelligence (AI). According to the poll, only 6% of respondents believe that AI will decrease the dissemination of false information, while a third of participants feel that it will not have a significant impact. The experiences of the previous year, primarily involving social media, have added to people’s worries about the potential amplification of misinformation in the upcoming 2024 election.

Rosa Rangel, a 66-year-old Democrat from Fort Worth, Texas, expressed her concerns, citing the proliferation of lies on social media over the past year. She believes AI will exacerbate the situation in 2024, comparing it to a pot brewing over. Although the survey found that just 30% of American adults have used AI chatbots or image generators, and fewer than half have heard much about AI tools, there is broad consensus that political candidates should not employ AI.

The poll asked respondents about different ways candidates could potentially use AI, and majorities responded negatively. Clear majorities said it would be a bad thing for presidential candidates to use AI to fabricate false or misleading media for political ads (83%), edit or touch up photos and videos (66%), tailor political ads to individual voters (62%), or answer voters' questions via chatbot (56%).

Pessimism about candidates' use of AI was bipartisan. Majorities of both parties disapproved of the creation of fake imagery or videos (85% of Republicans and 90% of Democrats) and of using chatbots to answer voter inquiries (56% of Republicans and 63% of Democrats).


Interestingly, this sentiment comes after AI has already made an appearance in the Republican presidential primary campaign. In April, the Republican National Committee released an AI-generated advertisement intended to depict the future of the country if President Joe Biden is reelected. The ad featured realistic-looking but fabricated images of boarded-up storefronts, military patrols, and panicked waves of immigrants. Similarly, Ron DeSantis, the Republican governor of Florida, employed AI-generated images in his campaign, manipulating visuals to make it appear as if former President Donald Trump was embracing Dr. Anthony Fauci, a prominent figure in the COVID-19 pandemic response. The ad also used an AI voice-cloning tool to mimic Trump’s voice in a social media post.

Andie Near, a 42-year-old from Holland, Michigan, who typically favors Democratic candidates, emphasized the importance of politicians campaigning based on their merits rather than their ability to instill fear. Near, who has used AI tools to retouch images in her work, believes that misleading uses of AI can deepen and worsen the effects caused by conventional attack ads.

Thomas Besgen, a 21-year-old Republican college student from Connecticut, also objected to campaigns using deepfake audio or imagery to make it appear that a candidate said something they never actually said. He considers such tactics morally wrong and advocates banning deepfake ads; if an outright ban is not possible, he suggests that these ads at least be labeled as AI-generated.

Besgen, who utilizes AI tools like ChatGPT for educational and recreational purposes, expressed his trust in the information provided by these platforms. He plans to use AI tools to learn more about the presidential candidates, a practice that only 5% of adults say they are likely to engage in.


Despite this, the survey results indicated widespread skepticism towards the information provided by AI chatbots, with only 5% of respondents expressing high confidence in the factual accuracy of the information. The majority of adults (61%) stated that they are not very or not at all confident in the reliability of AI-generated content. This aligns with the cautionary advice from AI experts, who warn against relying on chatbots for accurate information. These AI models excel at mimicking writing styles but are prone to fabricating information.

The survey also revealed that adults from both major political parties are receptive to regulations on AI. Respondents displayed more positive than negative reactions to potential bans or labeling requirements for AI-generated content suggested by tech companies, the federal government, social media platforms, or news media outlets. About two-thirds supported a government ban on AI-generated content containing false or misleading images in political ads, while a similar number favored technology companies labeling all AI-generated content on their platforms.

To address concerns surrounding AI-generated false or misleading information in the 2024 presidential elections, Americans believe that shared responsibility lies with technology companies (63%), news media organizations (53%), social media companies (52%), and the federal government (49%).

As the conversation surrounding AI and its impact on politics continues, Americans remain cautious and advocate for regulatory measures to ensure the responsible use of AI tools. The survey highlights the importance of transparency, ethical standards, and accountability in the deployment and regulation of AI technology.

Frequently Asked Questions (FAQs) Related to the Above News

What are the main concerns Americans have about political misinformation spread by AI?

The main concerns include the potential amplification of misinformation in the upcoming 2024 election, given the experiences of the previous year. People worry that AI will exacerbate the proliferation of lies on social media and create a more challenging environment for distinguishing fact from fiction.

How do Americans feel about political candidates using AI?

The majority of Americans feel unfavorably about political candidates employing AI for various purposes. They expressed negative views on the use of AI to fabricate false or misleading media for political ads, edit or touch up photos and videos, tailor political ads to individual voters, or answer voters' questions via chatbot.

Did Republicans and Democrats have different opinions on the use of AI by candidates?

Not significantly. Both Republicans and Democrats were pessimistic about the use of AI by candidates, and majorities from both parties disapprove of the creation of fake imagery or videos and of using chatbots to answer voter inquiries.

Have AI-generated political ads already appeared in campaigns?

Yes, AI-generated political ads have already made an appearance in campaigns. For example, in the Republican presidential primary campaign, an AI-generated advertisement was released by the Republican National Committee, and AI-generated images were used by Ron DeSantis, the Republican governor of Florida, in his campaign.

What do Americans think about the use of deepfake technology in political campaigns?

Americans have expressed disapproval of campaigns using deepfake audio or imagery to make it appear that a candidate said something they never actually said. Many consider such tactics morally wrong and advocate banning deepfake ads, or at least labeling them as AI-generated if a ban is not possible.

How confident are Americans in the accuracy of information provided by AI chatbots?

The survey results indicated widespread skepticism towards the information provided by AI chatbots. Only 5% of respondents expressed high confidence in the factual accuracy of the information, while the majority stated that they are not very or not at all confident in the reliability of AI-generated content.

Are Americans receptive to regulations on AI?

Yes, adults from both major political parties showed receptiveness to regulations on AI. Respondents displayed more positive than negative reactions to potential bans or labeling requirements for AI-generated content, suggested by tech companies, the federal government, social media platforms, or news media outlets.

Who do Americans believe holds shared responsibility in addressing AI-generated misinformation?

Americans believe that shared responsibility lies with technology companies, news media organizations, social media companies, and the federal government. They believe that these entities should play a role in addressing AI-generated false or misleading information in the 2024 presidential elections.

