Google has recently faced criticism after its AI-powered Google Nest Assistant was caught seemingly refusing to provide information about the number of Jews killed in the Holocaust. The incident came to light when a video went viral on social media, showing the device unable to answer basic questions related to the Holocaust but having no trouble responding to inquiries about the Nakba, the Palestinian catastrophe.
In the video, a user asked Google Nest Assistant how many Jews were killed by the Nazis during World War II, to which the AI replied, "Sorry, I don't know." It gave similar responses to questions about the Holocaust itself, Adolf Hitler's targets, concentration camp deaths, and overall Holocaust death tolls. However, the device provided detailed information about the Nakba, referring to it as the ethnic cleansing of Palestinians.
Author Tim Urban shared the video on social media and, expressing disbelief, tested the device himself and obtained the same results. He noted that Google Nest did answer questions about other historical events, such as World War II casualties and the Rwandan genocide. The episode raised concerns about the reliability of Google's AI technology and the trustworthiness of the information the company provides.
Venture capitalist Tal Morgenstern also criticized the incident, attributing it to deliberate human intervention rather than an AI malfunction. Clifford D. May, founder of the Foundation for Defense of Democracies, condemned the episode, calling it a modern form of Holocaust denial delivered through artificial intelligence.
In response to the backlash, a Google spokesperson said the behavior was unintended and limited to certain queries and devices, and that the company had moved quickly to fix the bug. The incident has fueled broader discussion about the transparency and accuracy of AI systems, and about what is needed to prevent misinformation and maintain user trust in the information they provide.