Google’s Bard Chatbot Raises Concerns Among Employees
Google’s Bard chatbot has sparked doubt among some of the company’s own employees, raising questions about its usefulness and the resources being devoted to the project. Launched in February as one of Google’s flagship generative AI products, Bard was built in response to OpenAI’s ChatGPT. Since then, skepticism among Google’s designers, product managers, and engineers has surfaced in an invitation-only Discord chat.
One of the key concerns raised by employees is whether large language models (LLMs) like Bard are genuinely useful. Cathy Pearl, a user experience lead for Bard, questioned their practical value: “The biggest challenge I’m still thinking of: what are LLMs truly useful for, in terms of helpfulness? Like really making a difference. TBD!”
Dominik Rabiej, a senior product manager for Bard, also expressed reservations about trusting LLM output, emphasizing the importance of independent verification: “My rule of thumb is not to trust LLM output unless I can independently verify it… Would love to get it to a point that you can, but it isn’t there yet.”
A common concern with AI-powered chatbots is their tendency to generate false or misleading information. Google’s Bard, for instance, inaccurately reported a ceasefire between Israel and Gaza despite the violent escalation then underway between the two sides. Errors like this reinforce apprehensions over Google’s push into generative AI technology.
This is not the first time that Google employees have expressed doubts about the company’s generative AI initiatives. Leaked audio recordings previously highlighted concerns among Googlers regarding the potential consequences of Google’s aggressive pursuit of generative AI. In May, employees bombarded leaders with questions about the company’s strategic focus, asking whether it had become overly fixated on AI.
Google has yet to respond to queries regarding these employee concerns.
In the end, Google’s Bard chatbot has generated doubt within the company’s own ranks, with employees questioning its practical usefulness and the resources allocated to the project. Concerns over the accuracy of AI-generated output and the strategic direction of Google’s generative AI efforts add further complexity to the situation. As the issue continues to unfold, it remains to be seen how Google will address its employees’ reservations and navigate the challenges of generative AI technology.