Kaspersky Highlights Caution When Sharing Sensitive Data with AI Chatbots
In response to recent news about new functionality in ChatGPT, experts at cybersecurity firm Kaspersky are urging users to exercise caution when sharing sensitive information with AI chatbots. OpenAI’s introduction of custom GPTs (Generative Pre-trained Transformers), which can be invoked within conversations with the original ChatGPT, raises concerns about data confidentiality.
Vladislav Tushkanov, Research Development Group Manager in Kaspersky’s Machine Learning Technology Research team, emphasizes that the enhanced capabilities of custom GPTs call for user awareness and caution: these models can leverage external resources and tools to deliver advanced functionality. To mitigate the risk of data exfiltration during conversations, OpenAI has introduced a mechanism that lets users review and approve the actions of custom GPTs. When a custom GPT attempts to send data to a third-party service, the user is prompted to allow or deny the request and can inspect the data about to be transmitted via a drop-down control in the interface. The same security mechanism applies to the newly added @mention functionality.
While this serves as a protective measure, users must review each request diligently, even though doing so adds friction to the overall experience. It is also important to understand that user data can leak from a chatbot service in other ways: through errors or vulnerabilities in the service itself, through retention of conversations for model training, or through unauthorized access to user accounts. Consequently, caution is warranted when sharing personal or confidential information with any online chatbot service.
As the adoption of AI chatbots continues to rise, users should prioritize data privacy and security. Remaining vigilant about the types of information shared, closely scrutinizing the permissions requested by custom GPTs, and following established best practices for online privacy are essential steps toward protecting sensitive data.