Experts Warn of Data Risks with AI Chatbots: Exercise Caution in Sharing

Kaspersky Highlights Caution When Sharing Sensitive Data with AI Chatbots

In response to recent news about new ChatGPT functionality, experts from cybersecurity firm Kaspersky are urging users to exercise caution when sharing sensitive information with AI chatbots. OpenAI's introduction of custom GPTs (Generative Pre-trained Transformers), which can now be brought into conversations with the original ChatGPT, raises concerns about potential risks to data confidentiality.

Vladislav Tushkanov, Research Development Group Manager at Kaspersky’s Machine Learning Technology Research Team, emphasizes that users should remain aware and cautious given the enhanced capabilities of GPTs. These models can leverage external resources and tools to deliver advanced functionality. To address the risk of data exfiltration during conversations, OpenAI has introduced a mechanism that lets users review and approve the actions of custom GPTs. When a custom GPT attempts to send data to a third-party service, the user is prompted to allow or deny the request, and can inspect the data about to be transmitted by expanding a drop-down in the interface. The same security mechanism applies to the newly added @mention functionality.

While this serves as a protective measure, it requires users to diligently review each request, which may affect their overall experience. It is also important to understand that user data can leak from a chatbot service in other ways: through errors or vulnerabilities in the service itself, retention of submitted information during model training, or unauthorized access to user accounts. Consequently, it is crucial to exercise caution when sharing personal and confidential information with any chatbot service online.

As the demand for AI chatbots continues to rise, it is imperative for users to prioritize data privacy and security. Remaining vigilant regarding the types of information shared, closely scrutinizing permissions requested by custom GPTs, and adopting best practices for online privacy are essential steps users can take to protect their sensitive data.

Aniket Patel
Aniket is a skilled writer at ChatGPT Global News, contributing to the ChatGPT News category. With a passion for exploring the diverse applications of ChatGPT, Aniket brings informative and engaging content to our readers. His articles cover a wide range of topics, showcasing the versatility and impact of ChatGPT in various domains.
