Google's new AI chatbot, Bard, was meant to make complex topics simpler, but instead prompted a $100 billion drop in the company's market value after an inaccuracy appeared in its promotional video. Google has faced criticism for the limited information it surfaces in searches and for its reluctance to prioritize AI ethics. Even after soliciting test feedback from 80,000 employees, much of it negative, Google released Bard with only clear warning notices attached. Competition from Microsoft and OpenAI's ChatGPT has heightened these concerns, and Google's former AI ethics lead has expressed her frustration. With a clearer focus on AI ethics and transparency, Google can still compete with the best.
ChatGPT is an AI-driven chatbot developed by OpenAI that continues to gain traction among users. Microsoft's own AI chip, codenamed "Athena", is being tested internally to help lower the cost of running OpenAI's services like ChatGPT, which reportedly carry a hefty $700,000-a-day price tag. Microsoft's aggressive move to build its own AI chip asserts its leadership among AI technology companies, challenging giants like Google, which rushed its AI product, Bard, to market. Optimism is high, as SemiAnalysis' Chief Analyst Dylan Patel indicates that Microsoft's own AI chip could provide "the needed boost to Microsoft in this competitive environment."
Google, the tech giant, has been making waves with the launch of its AI chatbot Bard. According to Bloomberg, employees tasked with evaluating the AI have given it harsh reviews, calling it "cringeworthy" and a "pathological liar". This shines a light on the potential deprioritization of AI ethics for the sake of a faster release; users of the beta version have reported issues such as incorrect and plagiarized advice, and even wrong answers to math questions.
Google's AI language model, Bard, has come under harsh criticism from employees, with some labeling it a "pathological liar." Eighteen current and former employees shared their concerns, including worries about the ethical implications of its use. Meanwhile, Google's ethics team appears demoralized and powerless. With Bard launched in April, Google must renew its focus on its ethical commitments.