Google Instructs Employees to Avoid Adding Confidential Information to AI Chatbots, Report Reveals

Google is warning its staff not to enter confidential material into AI chatbots, including its own Bard. Alphabet, Google's parent company, has instructed employees not to provide chatbots with any sensitive information. The move comes as AI companies use messages submitted by users to train and improve their chatbots' language understanding. Human reviewers may read those messages, and previous studies have shown that the AI itself can reproduce such data in its responses. Other companies, including Walmart, Microsoft, and Amazon, have issued similar alerts after spotting internal information in chatbot output, such as ChatGPT answers that closely resembled private material. Google's warning is noteworthy because it covers the company's own chatbot and also cautions employees against using AI tools for coding.