Google’s Lesson: AI Chatbots Like Bard and ChatGPT Can Be a Double-Edged Sword

Alphabet Inc., the parent company of Google, has reportedly warned its employees not to enter confidential information into AI chatbots, including its own chatbot, Bard. The company is concerned about protecting sensitive data: human reviewers may read chats, and the AI may reproduce information absorbed during training, creating the risk of data leaks. Google has also urged its engineers to avoid directly using computer code generated by chatbots. Like Amazon.com and Deutsche Bank, which have established their own guidelines on AI chatbot use, Google is exercising caution internally even as it promotes Bard in more than 180 countries and 40 languages. Meanwhile, Ireland’s Data Protection Commission has raised privacy concerns about the chatbot’s impact. Despite these risks, the use of AI chatbots is expected to grow, transforming how businesses operate and communicate.