Hackers can read supposedly encrypted chats with AI assistants, according to researchers at Ben-Gurion University. The vulnerability affects cloud-based AI assistants such as ChatGPT and lets an attacker who intercepts the encrypted traffic infer the content of conversations between users and the assistant, without breaking the encryption itself.
The research revealed that chatbots like ChatGPT stream their responses as a series of small tokens, sent one by one as they are generated so that replies appear in real time. Because each token travels in its own encrypted packet, an eavesdropper can intercept the traffic and analyze the length and sequence of those packets to infer what the response says, without ever decrypting them.
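To illustrate the core of the side channel, here is a minimal sketch, not the researchers' actual tooling: it assumes each streamed token is sent in its own encrypted record with a fixed, known overhead (the byte values are purely illustrative), and shows how an observer could recover per-token lengths from packet sizes alone. In the real attack, this length sequence would then be fed to a model that guesses plausible responses.

```python
# Minimal sketch of the token-length side channel (illustrative only).
# Assumption: every streamed token is sent in its own encrypted record,
# and the ciphertext adds a fixed, known overhead per record.

CIPHERTEXT_OVERHEAD = 24  # hypothetical per-record overhead in bytes


def token_lengths_from_packets(packet_sizes: list[int]) -> list[int]:
    """Recover the plaintext length of each streamed token from observed
    encrypted packet sizes, without decrypting anything."""
    return [size - CIPHERTEXT_OVERHEAD for size in packet_sizes]


# Example: a captured sequence of encrypted record sizes (made-up numbers).
observed = [27, 25, 29, 26]
print(token_lengths_from_packets(observed))  # -> [3, 1, 5, 2]
```

The recovered lengths leak the shape of the response (word lengths and ordering), which is what makes the subsequent reconstruction step feasible.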
Yisroel Mirsky, head of the Offensive AI Research Lab, highlighted the severity of the vulnerability, stating that anyone able to observe the traffic, whether on the same Wi-Fi network or elsewhere on the internet, can read private chats sent through ChatGPT and similar services without being detected.
The researchers suggested two solutions to address the issue: either stop transmitting tokens individually (batching the response before it is sent) or pad every token to the maximum packet length so that individual token sizes can no longer be distinguished on the wire.
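A minimal sketch of the padding idea follows, assuming a hypothetical fixed record size; it is not the mitigation any vendor actually shipped, only an illustration of why padding hides token lengths from a network observer.

```python
# Illustrative sketch: pad each token to a fixed size before encryption so
# every encrypted record looks identical in length to an eavesdropper.

MAX_TOKEN_BYTES = 16  # hypothetical fixed record size


def pad_token(token: str, pad_byte: bytes = b"\x00") -> bytes:
    """Pad a token to a fixed length so its true size is not observable."""
    data = token.encode("utf-8")
    if len(data) > MAX_TOKEN_BYTES:
        raise ValueError("token longer than the fixed record size")
    return data + pad_byte * (MAX_TOKEN_BYTES - len(data))


print(len(pad_token("Hello")))  # 16 bytes
print(len(pad_token("Hi")))     # also 16 bytes: lengths are indistinguishable
```

The trade-off is bandwidth: padding or batching hides the side channel at the cost of sending more data or delaying the streamed reply.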
The vulnerability was confirmed across various platforms, including Microsoft's Bing AI (Copilot) and OpenAI's ChatGPT-4. The researchers successfully reconstructed responses from multiple services by exploiting the flaw, indicating a widespread security concern across the AI assistant ecosystem.
Addressing these vulnerabilities would better protect users' privacy and sensitive information from interception and unauthorized access. It is essential for AI developers to prioritize security measures and encryption protocols to prevent such exploits in the future.