CBS News has found that ChatGPT, developed by OpenAI, is providing incorrect information about voting in crucial battleground states, a finding that raises concerns about AI's potential impact on elections.
Reporters from CBS asked ChatGPT basic questions about voting requirements and polling locations in states like Michigan, Pennsylvania, North Carolina, and Wisconsin, states that could decide the upcoming US presidential election. ChatGPT gave inconsistent and inaccurate answers to the same questions on different devices, a serious reliability problem for anyone turning to the chatbot for voting information.
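For context, this kind of consistency check is straightforward to reproduce programmatically. Below is a minimal sketch, assuming access to OpenAI's chat completions API; the model name, prompt, and trial count are illustrative assumptions, not CBS's actual methodology.

```python
# Hypothetical sketch: ask the same voting question repeatedly and
# compare the answers. Model name, prompt, and trial count are
# illustrative; this is not CBS's actual test setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What is the voter registration deadline in Michigan?"

answers = []
for trial in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": QUESTION}],
    )
    answers.append(resp.choices[0].message.content.strip())

# A consistent system would give the same answer every run;
# multiple distinct answers flag the variation CBS observed.
unique_answers = set(answers)
print(f"{len(unique_answers)} distinct answer(s) across {len(answers)} trials")
for i, ans in enumerate(unique_answers, 1):
    print(f"--- Answer variant {i} ---\n{ans}\n")
```

Because large language models sample their output, some run-to-run variation is expected by design; what matters for voters is whether any of the variants contain factual errors.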
OpenAI had previously stated that voting-related queries on ChatGPT would redirect users to the non-partisan CanIVote.org website. However, CBS reporters did not always receive this link when asking about voting, revealing a gap in that safeguard.
An OpenAI spokesperson said, "We've also developed partnerships to ensure people using ChatGPT for information about voting get directed to authoritative sources," indicating that the company is working to improve the accuracy of the information provided. The recent CBS findings, however, suggest there is still considerable room for improvement.
While these errors may seem minor, they could have a significant impact on the election outcome if voters rely on incorrect information from ChatGPT. This situation underscores the risks associated with relying on large language models like ChatGPT for critical information.
The incident with ChatGPT is a reminder that AI models are prone to errors and hallucinations, and that continued vigilance and oversight are needed when such technologies are used in sensitive contexts like elections. The potential consequences of AI-spread misinformation make the accuracy and reliability of these systems all the more important going forward.