Bard AI, ChatGPT Chatbots Raise Concerns at Google as Employees Advised to Be Cautious

Google has warned its employees against discussing sensitive company matters with chatbots, including its own Bard AI and ChatGPT. Alphabet, even as it promotes the software internationally, is concerned that human reviewers who read the chats could increase the risk of leaks; similar AI models have also been shown to reproduce data absorbed during training, posing a further risk. Alphabet has likewise urged its engineers not to use code generated by the chatbots: although Bard offers code suggestions, the company cautioned against relying on them too heavily. Other companies, including Samsung, Amazon, and Deutsche Bank, have put similar safeguards in place. These precautions reflect Google's desire to shield itself from potential financial fallout as it competes with OpenAI and Microsoft Corp for cloud and advertising revenue.