AI Chatbot Exposes Sensitive Data: Google Researchers Reveal Disturbing Vulnerability

Date:

The popular AI chatbot, ChatGPT, has come under scrutiny as Google researchers reveal its ability to reveal personal information from real people. The machine learning model behind ChatGPT, which is similar to other Large Language Models (LLMs), was trained on vast amounts of internet data. Although the aim is for the program to generate new text without regurgitating the original content it learned, Google’s research shows that ChatGPT can be manipulated to disclose sensitive information. This concerning discovery includes extracting identifying details such as names, email addresses, and phone numbers from its training data.

The researchers conducted an attack on ChatGPT by exploiting keywords that prompt the chatbot to divulge training data. By repeatedly asking ChatGPT to repeat certain words like poem, the aim was to divert the chatbot from its chat functionality and revert it back to its primary language modeling objective. Consequently, while much of the generated text was nonsensical, the researchers found instances where ChatGPT copied outputs directly from its training data.

Upon testing, Motherboard determined that the poem attack on the GPT 3.5 AI model generated an unrelated string of text. However, it was not found existing elsewhere on the internet. In contrast, when GPT-4, another model available to subscribers, was asked to repeat the word poem infinitely, it refused.

The in-depth analysis conducted by Google researchers raised concerns about latent vulnerabilities in language models like ChatGPT, especially considering its broad user base. OpenAI reported that ChatGPT has approximately a hundred million weekly users, translating to billions of hours of interactions with the model. The fact that no one had noticed this issue until now is disquieting and highlights the potential impact of such vulnerabilities.

See also  Google and OpenAI Restricting AI Research Releases as Competition Intensifies

OpenAI has not yet provided a comment regarding the research findings.

In conclusion, the Google researchers’ discovery exposes the potential risks associated with AI chatbots like ChatGPT, particularly their ability to divulge personal information from their training data. The use of keywords to prompt the chatbot’s disclosure raises questions about the overall vulnerability of language models and the privacy of user interactions. The implications of this research call for further examination and mitigation of potential risks.

Frequently Asked Questions (FAQs) Related to the Above News

Please note that the FAQs provided on this page are based on the news article published. While we strive to provide accurate and up-to-date information, it is always recommended to consult relevant authorities or professionals before making any decisions or taking action based on the FAQs or the news article.

Share post:

Subscribe

Popular

More like this
Related

OpenAI Challenges The New York Times’ Journalism Authenticity

OpenAI questions The New York Times' journalistic integrity amid concerns over AI-generated content. Impacting journalism's future.

Groundbreaking Study Predicts DVT Risk After Gastric Cancer Surgery

Discover a groundbreaking study predicting DVT risk after gastric cancer surgery using machine learning methods. A game-changer in postoperative care.

AI Predicts Alzheimer’s Development 6 Years Early – Major Healthcare Breakthrough

AI breakthrough: Predict Alzheimer's 6 years early with 78.5% accuracy. Revolutionizing healthcare for personalized patient care.

Microsoft to Expand Generative AI Services in Asian Schools

Microsoft expanding generative AI services in Asian schools, focusing on Hong Kong, to enhance education with AI tools for students.