Google is the world’s largest search engine and a leading technology company. Founded in 1998, Google has become a powerful force in the tech sector, with products and services spanning multiple areas, including search, advertising, mobile, and artificial intelligence. Recently, some Google employees have raised concerns about the accuracy and potential hazards of the company’s artificial intelligence development, especially with regard to its chatbot, Bard.
Two employees tasked with reviewing the AI product attempted to block its release, citing accuracy and safety issues, according to The New York Times. Jen Gennai, director of Google’s Responsible Innovation group, oversaw the review process and is reported to have overruled their recommendation and downplayed the chatbot’s risks. After the Times report, Gennai responded that she had corrected inaccurate assumptions and had in fact added more risks and harms for consideration. Nevertheless, the company went ahead with a limited release of Bard.
This incident has spurred debate in the AI development community, and some experts have called for a six-month pause on advanced AI development due to the technology’s potential risks. John Burden, an AI expert at the Centre for the Study of Existential Risk, singled out the speed of AI development as a major concern, noting that advancements unimaginable five years ago now come and go in the blink of an eye. It is essential that artificial intelligence be developed responsibly, with safety as a central consideration.