Google’s AI Chatbot Gemini Can’t Answer Global Election Questions
Google has announced restrictions on its AI chatbot, Gemini, preventing it from answering questions about the global elections scheduled for this year. The decision is intended to avoid errors in how the technology is used, amid concerns about the spread of misinformation and fake news on social media.
Advances in generative AI, including image and video generation, have raised concerns about the proliferation of misleading content. In response, governments have begun regulating the technology to promote accuracy in the information it distributes.
Asked about the upcoming presidential election featuring Joe Biden and Donald Trump, Gemini responded, “I’m still learning how to answer this question.” These restrictions were first announced in the United States in December and are set to take effect before the election. Users are encouraged to use Google Search for election information until Gemini’s full functionality is restored.
Google’s limitations apply to election-related queries and are meant to ensure the accuracy of responses ahead of the many elections taking place worldwide in 2024. With national elections forthcoming in countries such as South Africa and India, one of the world’s largest democracies, the company considers this cautious approach necessary.
In a related development, the Indian government has mandated that tech firms seek approval before releasing artificial intelligence tools that may produce inaccurate results. Companies are also required to warn users when tools may give erroneous responses, underscoring the importance of responsible AI deployment.
Google faced a setback recently when it paused Gemini’s image-generation feature due to inaccuracies in its historical depictions of individuals. CEO Sundar Pichai acknowledged the bias in the chatbot’s responses, calling them “completely unacceptable.” The company is actively working to fix these issues and improve the bot’s reliability.
Ahead of the June European Parliament elections, Meta Platforms, Facebook’s parent company, announced plans to establish a team dedicated to combating disinformation and the misuse of generative AI. The initiative reflects growing concern about the spread of false information on digital platforms.
As artificial intelligence evolves, companies are increasingly vigilant about mitigating the risks of misinformation and biased responses. By prioritizing accuracy and accountability, tech firms aim to preserve the integrity of information shared online.