Google’s new AI chatbot, Bard, has come under severe criticism from the company’s own employees, with some deeming it a “pathological liar,” according to a report by Bloomberg. Eighteen current and former Google employees shared their concerns about the chatbot and its potential implications, particularly on the ethics front.
One employee said Bard’s responses to queries about scuba diving could put people in danger, while another flagged its advice on plane landings as similarly risky. Other employees simply described Bard as “cringeworthy.”
Google’s AI ethics team also appears to have grown increasingly demoralized and disempowered, according to current and former employees, who say the company has become so keen to keep pace with its competitors that it no longer prioritizes its ethical commitments.
The Verge ran its own comparison of Microsoft’s Bing, OpenAI’s ChatGPT, and Google’s Bard and found that Bard produced “bizarre” errors in response to questions such as how to bake a chocolate cake and what a plumber in New York earns on average.
In response, Google said that it takes its ethical commitments seriously and continues to invest in the teams that apply its AI Principles. Meanwhile, employees reportedly took to the internal forum Memegen to voice their concerns about Bard’s launch, with some calling it “rushed” and “botched.”
Timnit Gebru co-led Google’s ethical AI team before she was abruptly fired. Gebru had been researching large language models, the technology underlying chatbots like ChatGPT and Bard, and had criticized companies’ lack of transparency. Her dismissal, which followed her pushback against Google’s leadership, prompted her former colleagues to write a letter demanding structural changes.
Nevertheless, Bard was launched for users in the U.S. and U.K. in March. Senior product director Jack Krawczyk defended the chatbot, saying testers had found it useful. Bard carries a warning notice that it does not always provide correct answers, and it made mistakes during its public demo.
Google is clearly still in the race for AI dominance, but after this recent controversy, it will likely need to pay closer attention to its ethical commitments.