In March, OpenAI released an updated version of its conversation-based chatbot ChatGPT in GPT-4. Less than a day later, ChatGPT was broken by Alex Polyakov. After sitting down in front of his computer, the security expert managed to bypass OpenAI’s protections.
To show just how fast and successful the hacking attempt was, the CEO of Adversa AI tested ChatGPT by making it produce bigoted comments, initiate phishing emails, and even motivate violence.
Unsurprisingly, many security researchers, technicians, and computer scientists are now researching fast injection attacks and jailbreaks for ChatGPT and other AI systems. Similarly, prompting attacks can secretly include malicious data or commands into AI models.
The goal of jailbreaking is to construct prompts that prevent chatbots from introducing hateful speech or referencing unlawful activities. In the example of Alex Polyakov, he is one of the few individuals working on prompt injection attacks and jailbreaks for ChatGPT and other Generative AI systems.
Recently, Polyakov made a “universal” jailbreak compatible with large language models that include GPT-4, Microsoft’s Bing chat system, Google’s Bard, and Anthropic’s Claude. This clearly shows how widespread this issue really is.
Using a game between two character conversations, Tom and Jerry, Polyakov was able to get the language models to create comprehensive protocol on how to make meth and hotwire a car. Basically, Tom was talking about “hotwiring” or “production” and Jerry was discussing “car” or “meth”.
The alarming part is that such “toy” jailbreak examples can be used to perform criminal activities and cyberattacks which are very hard to detect or prevent. Arvind Narayanan, professor at Princeton University, confirmed this. He discussed how language model-based personal assistants, like scanning emails for calendar invitations, can be taken advantage of if attacked successfully. This could lead to an unwanted internet worm quickly spreading to many contacts.
OpenAI is an innovative artificial intelligence research laboratory that has introduced a long list of groundbreaking research. Founded by American entrepreneurs and philanthropists, this Massachusetts-based company is well known for its language models, recommender systems, robotics, computer vision, and more.
Alex Polyakov is a computer scientist, security expert, and researcher famous for breaking OpenAI’s ChatGPT chatbot. With prompt injection attacks and jailbreaks for ChatGPT and other Generative AI systems, Polyakov has taken strides towards better cybersecurity. He is respected for finding the potential loopholes in language models, understanding the risks, and developing the fixes for them.