Google recently debuted its new AI chatbot, Bard, which is meant to help users make sense of complex topics. The rollout stumbled immediately: the chatbot gave an inaccurate answer in a promotional video before launch, a mistake that wiped roughly $100 billion off parent company Alphabet’s market value. Google has since been criticized for saying little about how Bard will be integrated into Google Search and for not prioritizing AI ethics, reinforcing the perception that it is falling further behind its competitor, Microsoft.
Before releasing Bard to the general public, Google had roughly 80,000 of its employees test the chatbot. The feedback was overwhelmingly negative: employees described Bard as a “pathological liar” and its answers as “cringe-worthy,” and in internal tests it gave potentially fatal advice on topics such as scuba diving and landing a plane. Despite these reviews and internal doubts about the technology, Google released the chatbot anyway, labeling it an “experiment” and displaying clear notices to users about its limitations.
Google’s motive for the release was competitive pressure from OpenAI’s ChatGPT, which Microsoft has integrated into its own products. But the decision not to put AI ethics at the forefront of the release has raised concerns. Meredith Whittaker, a former Google manager and now president of the Signal Foundation, argued that if ethics are not prioritized over profit and growth, they will not ultimately be effective. Google has publicly stated that AI ethics remains one of its top priorities, yet employees report that AI ethics reviews are almost never mandatory, and Jennifer Gennai, a Google executive, reportedly suggested aiming for “80-85% fairness” rather than “99%”.
Margaret Mitchell, a former co-lead of Google’s Ethical AI team, has voiced similar frustrations with Google’s AI practices, describing a working atmosphere that no longer supported her team’s efforts. Ultimately, Google appears to be falling further behind ChatGPT and its own employees are concerned, yet the company remains committed to its AI technology.