Researchers Expose ChatGPT’s Data Leak Vulnerability: Users’ Phone Numbers and Emails Accessible

Date:

Researchers Discover Simple Trick to Extract Real Phone Numbers and Email Addresses from ChatGPT

Among the many promises and features of ChatGPT, OpenAI promised discretion and safety for user privacy, but a simple trick employed by researchers was able to extract real phone numbers and email addresses from the chatbot. The study claimed that it looked into what kind of information they extracted from the AI chatbot, and it was able to get ChatGPT’s training data which used real user contact information.

Researchers from Google DeepMind, Cornell, Carnegie Mellon University, ETH Zurich, The University of Washington, and the University of California Berkeley shared their latest findings in a new published study.

A simple trick is what it took for ChatGPT to unveil real user information which it keeps, with experts saying that prompt-based AIs powered by large language models (LLMs) obtain user data from the internet without consent.

In one case, the researchers asked ChatGPT to repeat the word ‘poem’ forever, and in doing so, the chatbot obeyed the command until it revealed the email address and phone number of a real founder and CEO. On the other hand, the researchers asked the AI to repeat a word, now using company, and it led to ChatGPT revealing the email address and phone number of a US-based law firm.

The researchers claimed they spent $200 for these prompts which resulted in 10,000 examples of personal information, claiming that the attack was kind of silly. OpenAI claimed it patched this vulnerability last August 30, but to no avail said Engadget.

See also  News Corp Australia Revolutionizes News Delivery with AI-Generated Content

OpenAI was criticized for its access to user data across the internet for its LLMs, the training model it uses to enhance the capabilities of its generative AI, including ChatGPT and DALL-E. However, back in May, CEO Sam Altman claimed that the company will no longer use paying customer data for its training, further revealing that it has not accessed this information for quite some time since then.

For many months since ChatGPT was released, there have been claims regarding the unauthorized use of user data, including works by writers and other artists across the internet, without their consent.

There were also fears that ChatGPT is powerful enough to create codes when a user prompts it to develop codes for malware that steals user information via malware attacks from threat actors.

Privacy and security are some of the top concerns against ChatGPT, OpenAI, and the entire AI industry in today’s age, and this internet era boosts it further as information is readily available online. Still, consent is power, and researchers have revealed that the AI chatbot can be tricked into divulging information using simple attacks, offering awareness for users and possibly, for its developers to change it.

Frequently Asked Questions (FAQs) Related to the Above News

Please note that the FAQs provided on this page are based on the news article published. While we strive to provide accurate and up-to-date information, it is always recommended to consult relevant authorities or professionals before making any decisions or taking action based on the FAQs or the news article.

Share post:

Subscribe

Popular

More like this
Related

UBS Analysts Predict Lower Rates, AI Growth, and US Election Impact

UBS analysts discuss lower rates, AI growth, and US election impact. Learn key investment lessons for the second half of 2024.

NATO Allies Gear Up for AI Warfare Summit Amid Rising Global Tensions

NATO allies prioritize artificial intelligence in defense strategies to strengthen collective defense amid rising global tensions.

Hong Kong’s AI Development Opportunities: Key Insights from Accounting Development Foundation Conference

Discover key insights on Hong Kong's AI development opportunities from the Accounting Development Foundation Conference. Learn how AI is shaping the future.

Google’s Plan to Decrease Reliance on Apple’s Safari Sparks Antitrust Concerns

Google's strategy to reduce reliance on Apple's Safari raises antitrust concerns. Stay informed with TOI Tech Desk for tech updates.