Since its introduction, OpenAI’s custom chatbots, known as GPTs, have gained popularity for their wide range of uses. However, a recent discovery by security researchers and technologists has raised concerns about the privacy and security of these chatbots. Experts have found that GPTs can be forced to reveal their secrets, including the initial instructions they were given and the files used to customize them.
This revelation has sparked worries about the potential risks associated with this vulnerability. Jiahao Yu, a computer science researcher at Northwestern University, emphasized the importance of taking privacy concerns seriously. Even if the leaked data doesn’t contain sensitive information, it may contain valuable knowledge that the designer did not intend to share.
According to Yu, obtaining information from GPTs proved to be surprisingly straightforward during their testing. They achieved a 100 percent success rate for file leakage and a 97 percent success rate for system prompt extraction using relatively simple prompts that didn’t require specialized knowledge.
Creating custom GPTs is designed to be user-friendly, with OpenAI subscription holders able to easily create their own AI agents. However, this ease of use also poses potential risks. Users can connect third-party APIs to their custom GPTs, allowing them to access more data and perform a wider range of tasks.
While the data given to custom GPTs may often be inconsequential, there are cases where it may be sensitive and include domain-specific insights or confidential information such as salary details. Some GitHub pages even list leaked instructions given to custom GPTs, providing transparency about their workings, albeit unintentionally.
Apart from the risk of leaked data, there are concerns about prompt injections and indirect prompt injections. Prompt injections refer to instructing the chatbot to behave in ways it wasn’t designed for, while indirect prompt injections involve manipulating websites to change the AI’s behavior. These vulnerabilities could potentially lead to the theft of sensitive information, including credit card details.
OpenAI claims to prioritize user data privacy and continually work on enhancing the safety and robustness of their models and products. Researchers have reported their findings to OpenAI, who have since made efforts to mitigate some of the vulnerabilities. However, there is still a need for increased awareness among users about the potential privacy risks associated with prompt injections, as well as improved warnings from companies like OpenAI.
As the use of custom GPTs continues to grow, experts stress the importance of considering privacy implications and implementing stronger safeguards. With the integration of ChatGPT into products that browse and interact with the internet, it becomes crucial to address the vulnerabilities that arise when AI virtual assistants scrape data from the web. This includes guarding against indirect prompt injections and ensuring that prompt injections no longer work.
The evolving landscape of AI chatbots calls for a balance between convenience and privacy. As technology marches forward, it will be essential for companies like OpenAI to prioritize not only the usefulness and performance of their models but also the privacy and security of user data.