Google’s parent company, Alphabet, has warned its employees about the potential dangers of AI chatbots. According to a report by Reuters, the company has advised staff not to enter confidential materials into chatbots, and engineers have been asked to avoid directly using chatbot-generated computer code.

The security measures stem from concerns that the human reviewers who help train chatbots could read sensitive data entered into conversations, potentially compromising security. Researchers have also found that AI models can reproduce data absorbed during training, creating a further leak risk.

Despite these concerns, Google continues to invest billions of dollars in its AI programs and to expand its AI toolset into other products such as Maps and Lens. Other companies, including Samsung, Amazon, and Deutsche Bank, have set similar AI chatbot standards for their employees, with Samsung outright banning ChatGPT and other generative AI from its workplace after allegedly suffering three leaks earlier this year.
Warning Staff About Chatbots Could Be a Red Flag for Google