Google has made significant strides in safeguarding against AI disinformation ahead of the upcoming US and global elections. The tech giant has announced a range of policies and safeguards to combat the spread of misleading information generated by artificial intelligence (AI) across its platforms and applications.
One of the key measures is labeling for AI-generated videos and for political ads created with AI. These labels give viewers transparency and help them identify content produced with generative AI tools such as Dream Screen on YouTube.
Google has also restricted its AI chatbot Gemini from answering election-related questions and prompts in the lead-up to the US and Indian elections. The restriction is intended to prevent the chatbot from spreading election disinformation and to support a fair electoral process.
In addition, YouTube will soon display notices alerting users to AI-generated content and will require creators to disclose when they have produced realistic altered or synthetic material. The goal is to help viewers distinguish authentic footage from AI-generated content.
The announcement coincides with Google’s decision to discontinue its AI image-creation tool after a series of errors and divisive outputs. As digital platforms prepare for elections in over 40 countries affecting billions of people worldwide, combating disinformation, particularly AI-generated content, has never been more critical.
Recent studies have shown that AI chatbots, including Gemini and OpenAI’s GPT-4, provide users with inaccurate election-related information. The AI Democracy Projects found that these chatbots often answer basic questions about voting rules and procedures incorrectly, raising concerns that they could spread disinformation.
With the prevalence of deepfakes reportedly growing by 900% annually, the need for stringent safeguards against AI-generated disinformation during elections cannot be overstated. Google’s proactive measures, including the restriction on Gemini’s ability to answer election-related queries, demonstrate the company’s commitment to protecting the integrity of the electoral process and combating misinformation.
As the digital landscape continues to evolve, it is imperative for tech companies like Google to prioritize transparency, accountability, and accuracy in handling AI-generated content. By implementing these safeguards, Google is taking a crucial step toward ensuring that users can navigate the vast array of online information with confidence.