Much has been written about ChatGPT and its extraordinary capabilities, so impressive that they’ve managed to scare a large part of the population. The chatbot can do it all, even create other AI-powered assistants, an option that many cybercriminals are now considering.
In November 2023, OpenAI launched a new tool, GPT Builder, that allows users to create their own assistants from scratch. GPT Builder marks the company's next significant step, moving OpenAI toward an app-store-like service.
However, a study conducted by BBC News revealed that GPT Builder can be used to create tools for cybercrime. In this instance, BBC News used it to create a program (dubbed Crafty Emails) capable of writing and sending convincing texts for use in scams.
The bot was trained on texts and social engineering resources provided by BBC News. With that material, it was able to craft quite convincing messages, all designed for use in scams, and it even generated its own logo. Notably, no programming was required at any point. This is one of GPT Builder's most appealing features: an assistant can be created using natural language alone.
Crafty Emails successfully recreated some of the most common scams found on the internet. For instance, it reproduced the famous Nigerian-prince email, complete with an emotionally appealing narrative designed to resonate with readers.
According to the BBC, OpenAI responded after the findings were published, stating that the company continuously improves its security measures based on how people use its products and that it does not want its tools to be used for malicious purposes.