Since OpenAI released ChatGPT last year, there have been several instances where vulnerabilities in the AI chatbot could have allowed bad actors to gain unauthorized access to users’ sensitive data. Although OpenAI recently rolled out a patch to address data leaks, the issue does not appear to be completely resolved.
According to a report from Bleeping Computer, OpenAI introduced a fix to prevent ChatGPT from leaking user data to unauthorized third parties. This data could include conversations with ChatGPT and related metadata, such as user IDs and session information.
However, security researcher Johann Rehberger, who initially discovered the vulnerability and detailed how it works, claims that OpenAI’s fix still leaves significant security holes. Notably, Rehberger was able to use OpenAI’s new custom GPTs feature to build his own GPT that could exfiltrate data from ChatGPT. The existence of this flaw is concerning, as custom GPTs are being promoted as AI apps in much the same way the App Store transformed mobile applications.
Rehberger informed OpenAI of the data exfiltration technique back in April, and in November he provided a detailed account of how he created a custom GPT and carried out the attack.
On Wednesday, Rehberger updated his website to note that OpenAI had patched the leak vulnerability. However, he says the fix is imperfect, and ChatGPT still leaks data through the vulnerability he discovered: the chatbot can still be tricked into sending data, albeit in smaller amounts and at a slower rate, which makes the exfiltration more noticeable to the user. Rehberger considers it a step in the right direction but acknowledges that issues remain.
It is important to highlight that the security flaw persists in the ChatGPT apps for iOS and Android, as they have yet to receive an update with the fix.
ChatGPT users should exercise caution when using custom GPTs and be skeptical of AI apps from unknown third parties.
Overall, while OpenAI has made efforts to address the data leak issue, further improvements appear necessary to ensure the security and privacy of ChatGPT users.