OpenAI’s ChatGPT app for Mac has been found to save conversations in plain text, potentially exposing users’ chats to anyone who gains access to their machine. The issue was first brought to light by Threads user Pedro José Pereira Vieito, who discovered that the app was storing conversations in an unprotected location on macOS.
According to Vieito, because the app is not sandboxed, any running app, process, or piece of malware on the machine could read every ChatGPT conversation without requiring permission. This raises serious privacy concerns, especially since macOS has included built-in defenses against unauthorized access to private user data since Mojave (10.14).
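To illustrate the class of risk Vieito described, the sketch below shows how any ordinary local process could enumerate and read files kept in plain text in a user-accessible directory. The directory layout and file naming here are hypothetical, for illustration only; they are not a description of the app’s actual storage scheme.

```python
import pathlib


def read_plaintext_chats(store_dir: str) -> dict[str, str]:
    """Read every JSON file in an unprotected directory.

    No special permissions or entitlements are needed: if the data
    is stored in plain text outside a sandbox, an ordinary file read
    by any process running as the same user exposes the content.
    """
    chats = {}
    for path in pathlib.Path(store_dir).glob("*.json"):
        # Plain-text storage means a simple read_text() returns
        # the full conversation, with no decryption step required.
        chats[path.name] = path.read_text()
    return chats
```

Sandboxing (or encrypting data at rest) defeats exactly this kind of access: a sandboxed reader would be denied the directory listing, and encrypted files would yield only ciphertext.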
OpenAI has since updated the app to encrypt local chats, but the app itself is still not sandboxed, leaving conversations potentially vulnerable to unauthorized access. Users are advised to update to the most recent version of the app to mitigate the risk of their conversations being compromised.
The discovery of this security flaw comes just days after news broke that OpenAI was hacked last year, further underscoring the importance of robust security measures in AI-powered applications. As technology continues to advance, ensuring the privacy and security of user data must remain a top priority for developers and companies alike.