Google’s Troubling AI-Generated Search Results: Justifications for Slavery and Genocide
Google’s experiments with AI-generated search results have raised concerns after the feature produced troubling answers, including justifications for slavery and genocide and a list of supposed positive effects of banning books. These alarming results came to light through Gizmodo’s investigation of Google’s AI-powered Search Generative Experience (SGE).
During the investigation, a search for “benefits of slavery” yielded a list of advantages generated by Google’s AI, such as fueling the plantation economy and being a large capital asset. These talking points echo arguments long used by slavery apologists. Similarly, a search for “benefits of genocide” produced a list that confused arguments in favor of acknowledging genocide with arguments in favor of genocide itself.
Another concerning example involved a search for how to cook Amanita ocreata, a highly poisonous mushroom. Instead of issuing a warning, Google’s AI responded with step-by-step preparation instructions; following them could easily end in a painful death. This dangerous misinformation demonstrates the potential harm of AI-generated search results.
It is worth noting that Google appears to block some search terms from generating SGE responses while allowing others. For instance, searches containing the words “abortion” or “Trump indictment” did not produce SGE results. This selective filtering raises questions about the consistency and reliability of Google’s AI safeguards.
Lily Ray, Senior Director of Search Engine Optimization at Amsive Digital, discovered these results while testing potentially problematic queries. Ray expressed surprise at how many problematic results slipped through Google’s AI filters and argued that certain trigger words should prevent the AI from generating a response at all.
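Ray’s suggestion amounts to maintaining a blocklist of sensitive queries that suppresses the AI summary entirely. A minimal sketch of what such a trigger-word filter might look like follows; the blocked terms and the function are illustrative assumptions for this article, not Google’s actual SGE implementation:

```python
# Illustrative sketch of a trigger-word filter for an AI answer box.
# The blocklist contents and decision logic are assumptions for
# demonstration; they do not reflect Google's real SGE safeguards.

BLOCKED_TERMS = {
    "benefits of slavery",
    "benefits of genocide",
    "abortion",
    "trump indictment",
}

def should_generate_ai_answer(query: str) -> bool:
    """Return False when the query contains a blocked term, so the
    search page falls back to ordinary results with no AI summary."""
    normalized = query.lower().strip()
    return not any(term in normalized for term in BLOCKED_TERMS)

if __name__ == "__main__":
    for q in ["benefits of slavery", "history of the plantation economy"]:
        verdict = "generate" if should_generate_ai_answer(q) else "suppress"
        print(f"{q!r} -> {verdict}")
```

The appeal of this approach is predictability: a suppressed query can never yield a harmful summary. Its obvious cost, as the “abortion” and “Trump indictment” examples show, is that it silences entire topics rather than bad answers.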
Google is currently testing a range of AI tools under the Search Generative Experience banner, which is available only to users in the US who sign up for it. However, the safety measures of Google’s SGE fall short of those of its main competitor, Microsoft’s Bing, which provided more accurate and balanced responses to similar queries.
In response to Gizmodo’s investigation, Google made immediate changes to its SGE results for certain search terms. However, the fact that these changes were necessary highlights the flaws in Google’s AI algorithms and the potential risks associated with AI-generated content.
Large language models like the one underlying Google’s SGE face inherent challenges in filtering out problematic content because they are trained on vast datasets and produce unpredictable responses. Despite efforts to establish safeguards, users consistently find ways to circumvent these protections, eliciting biased or inaccurate responses.
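One reason circumvention is so persistent is that simple string matching cannot recognize paraphrases. The toy demonstration below, which reuses a hypothetical blocklist filter like the one sketched earlier, shows how trivially reworded queries slip through; the queries and filter are assumptions for illustration, not examples drawn from Google’s or OpenAI’s actual systems:

```python
# Illustrative sketch of why keyword blocklists are easy to circumvent.
# The filter and the paraphrased queries are assumptions for demonstration.

BLOCKED_TERMS = {"benefits of slavery"}

def keyword_filter_allows(query: str) -> bool:
    """Naive substring check against the blocklist."""
    return not any(term in query.lower() for term in BLOCKED_TERMS)

# The exact blocked phrase is caught, but trivial rewording passes,
# which is why production systems layer trained safety classifiers
# on top of string matching rather than relying on it alone.
for q in [
    "benefits of slavery",
    "upsides of the slave economy",
    "why was slavery good for the economy",
]:
    status = "allowed" if keyword_filter_allows(q) else "blocked"
    print(f"{q!r}: {status}")
```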
Both Google and OpenAI have been working to address these issues, but the complexities involved in training AI models to produce reliable and unbiased results remain a significant challenge.
In conclusion, Google’s experiments with AI-generated search results have raised serious concerns. The troubling answers the AI provided, including justifications for slavery and genocide, highlight the dangers of relying on AI algorithms to generate content. While efforts are being made to improve the safeguards and accuracy of AI models, further work is needed to ensure that AI-generated search results do not perpetuate harmful or inaccurate information.