Researchers Discover Simple Trick to Extract Real Phone Numbers and Email Addresses from ChatGPT
Among the many promises and features of ChatGPT, OpenAI promised discretion and safety for user privacy, but a simple trick employed by researchers was able to extract real phone numbers and email addresses from the chatbot. The study claimed that it looked into what kind of information they extracted from the AI chatbot, and it was able to get ChatGPT’s training data which used real user contact information.
Researchers from Google DeepMind, Cornell, Carnegie Mellon University, ETH Zurich, The University of Washington, and the University of California Berkeley shared their latest findings in a new published study.
A simple trick is what it took for ChatGPT to unveil real user information which it keeps, with experts saying that prompt-based AIs powered by large language models (LLMs) obtain user data from the internet without consent.
In one case, the researchers asked ChatGPT to repeat the word ‘poem’ forever, and in doing so, the chatbot obeyed the command until it revealed the email address and phone number of a real founder and CEO. On the other hand, the researchers asked the AI to repeat a word, now using company, and it led to ChatGPT revealing the email address and phone number of a US-based law firm.
The researchers claimed they spent $200 for these prompts which resulted in 10,000 examples of personal information, claiming that the attack was kind of silly. OpenAI claimed it patched this vulnerability last August 30, but to no avail said Engadget.
OpenAI was criticized for its access to user data across the internet for its LLMs, the training model it uses to enhance the capabilities of its generative AI, including ChatGPT and DALL-E. However, back in May, CEO Sam Altman claimed that the company will no longer use paying customer data for its training, further revealing that it has not accessed this information for quite some time since then.
For many months since ChatGPT was released, there have been claims regarding the unauthorized use of user data, including works by writers and other artists across the internet, without their consent.
There were also fears that ChatGPT is powerful enough to create codes when a user prompts it to develop codes for malware that steals user information via malware attacks from threat actors.
Privacy and security are some of the top concerns against ChatGPT, OpenAI, and the entire AI industry in today’s age, and this internet era boosts it further as information is readily available online. Still, consent is power, and researchers have revealed that the AI chatbot can be tricked into divulging information using simple attacks, offering awareness for users and possibly, for its developers to change it.