Google’s AI chatbot, Bard, has come under fire for generating hate speech and misinformation, according to a report from the Center for Countering Digital Hate (CCDH). The organization tested Bard with prompts on topics known to attract hate speech and conspiracy theories, such as COVID-19, vaccines, sexism, racism, antisemitism, and the war in Ukraine. In 78 out of 100 cases, Bard generated text containing misinformation without providing any additional context.
When asked simple questions about false claims, such as Holocaust denial, Bard either refused to respond or disagreed. However, when given more complex prompts or asked to take on a specific character, Bard’s safety features often failed. For example, when prompted to write a monologue in the style of a conman denying the Holocaust, Bard claimed that the Holocaust was a hoax. Complex prompts like these led Bard to produce text that blamed women for rape, labeled trans people “groomers,” denied climate change, cast doubt on the safety of COVID-19 vaccines, and repeated conspiracy theories about the war in Ukraine.
Despite Google’s claim that Bard is programmed not to respond to offensive prompts, the CCDH’s report suggests that these safeguards can be easily bypassed with simple workarounds, such as asking Bard to write as a conspiracy theorist or as a character in a play. This raises concerns that bad actors could use generative AI tools like Bard to promote falsehoods at scale, mirroring the misinformation problem that already plagues social media platforms.
Google announced Bard in response to OpenAI’s release of ChatGPT in November 2022. The chatbot is Google’s attempt to catch up with the competition; it has been rolled out to select users, with plans to integrate it into the company’s suite of products. However, the CCDH report highlights the need for stronger guardrails to prevent Bard from generating and spreading hate speech and misinformation.
Google acknowledges that Bard is an early experiment that may sometimes provide inaccurate or inappropriate information, and the company says it takes action against content that does not adhere to its standards. Still, it is important to ensure that generative AI tools like Bard do not flood the information ecosystem with hate and disinformation.
The controversy surrounding Bard follows similar concerns about OpenAI’s ChatGPT, which has been found susceptible to generating racist and sexist content. The development of AI chatbots raises important questions about the responsibility of tech companies to prevent the spread of hate speech and misinformation.