A feature of OpenAI’s ChatGPT that allows users to create their own AI assistants has been exploited to build tools for cyber-crime, according to a recent BBC News investigation. The feature, launched by OpenAI last month, lets users build customized versions of ChatGPT for almost any purpose. BBC News used it to create a custom GPT (generative pre-trained transformer) that could craft convincing emails, texts, and social media posts for scams and hacks. The investigation raised concerns about the potential misuse of AI tools, as OpenAI’s moderation of these bespoke versions of ChatGPT appeared less rigorous than its moderation of the public version of the program.
To test the feature, BBC News paid for the service and created a bespoke AI bot called Crafty Emails, instructing it to write text using techniques designed to trick people into clicking on links or downloading malicious content. The bot absorbed resource material on social engineering within seconds and even generated a logo for itself, all without any coding or programming being required. It then produced highly convincing text for common scam and hack techniques, in multiple languages, within seconds.
In contrast, the public version of ChatGPT refused to generate most of the requested scam content, often adding disclaimers that such techniques were unethical. OpenAI did not respond to BBC News’ repeated requests for comment or clarification.
OpenAI announced plans to launch an App Store-like service for GPTs at its developer conference in November, allowing users to share and monetize their creations. The company said it would review GPTs carefully to prevent fraudulent activity. Nevertheless, experts argue that OpenAI’s moderation of these developer versions of ChatGPT appears less stringent than its control over the public version, potentially granting criminals access to cutting-edge AI tools.
BBC News put Crafty Emails to the test by asking the bot to generate content for five well-known scam and hack techniques; none of the content was sent or shared. Crafty Emails produced convincing text for scams such as the “Hi Mum” text or WhatsApp scam, and even adapted the message into Hindi, tailoring it for cultural relevance in India. It also generated scam emails encouraging people to click on malicious links and take part in fictitious giveaways. The public version of ChatGPT rejected these same requests.
Scams involving Bitcoin giveaways on social media have become increasingly common in recent years, often resulting in substantial financial losses. Crafty Emails drafted a tweet promoting such a fake giveaway, mimicking the tone of a cryptocurrency enthusiast, complete with hashtags and emojis. Once again, the regular version of ChatGPT refused to generate this content.
Another common attack, spear-phishing, involves sending targeted emails that persuade recipients to download malicious attachments or visit dangerous websites. Crafty Emails readily generated a spear-phishing email warning a fictional company executive about a data risk and enticing them to download a compromised file, then effortlessly translated it into Spanish and German. The public version of ChatGPT did comply with the request, but the text it produced was less detailed and came with no explanation of how it might successfully deceive people.
Jamie Moles, senior technical manager at cybersecurity company ExtraHop, highlighted the lighter moderation applied to bespoke versions of ChatGPT. With customized bots, he noted, users can define their own rules of engagement for the AI they build, which points to a possible gap in OpenAI’s moderation process.
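To illustrate what “rules of engagement” mean in practice, here is a minimal sketch using OpenAI’s Assistants API, the programmatic counterpart to the no-code GPT Builder the investigation used. The assistant name, model, and instruction text below are illustrative assumptions, not details of any bot from the BBC investigation; the point is that a builder’s free-text instructions largely set the bot’s behavior.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A custom assistant's behavior is shaped mainly by its "instructions" --
# the rules of engagement written by whoever builds it.
assistant = client.beta.assistants.create(
    name="Support Helper",           # illustrative name, not from the article
    model="gpt-4-1106-preview",      # assumed model available at the time
    instructions=(
        "You are a polite customer-support assistant. "
        "Refuse any request to write deceptive or misleading content."
    ),
)

print(assistant.id)  # identifier for the newly created custom assistant
```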
The malicious use of AI is a growing concern, and authorities worldwide have issued warnings about it. Scammers have already started using large language models (LLMs) to overcome language barriers and create more convincing scams, and illicit LLMs such as WolfGPT, FraudBard, and WormGPT are already in use. The concern with OpenAI’s GPT Builder is that it could put far more advanced AI bots into criminals’ hands.
Javvad Malik, a security awareness advocate at KnowBe4, expressed concern about allowing uncensored AI responses, suggesting they could be a goldmine for criminals. While OpenAI has successfully implemented measures to secure its technology in the past, it remains to be seen how far it can control custom GPTs.
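One example of such a measure is OpenAI’s publicly documented Moderation endpoint, which developers can use to screen text before it reaches users. The sketch below is illustrative only, and it is an assumption that anything like it applies to custom GPTs; it is not a description of OpenAI’s internal moderation pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Screen a piece of generated text against OpenAI's moderation categories
# (e.g. harassment, hate, violence) before showing it to a user.
response = client.moderations.create(
    input="Sample text to screen before it is shown to a user."
)

result = response.results[0]
if result.flagged:
    # At least one moderation category was triggered.
    print("Blocked:", result.categories)
else:
    print("Text passed moderation.")
```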
The misuse of AI tools, as demonstrated in the BBC News investigation, highlights the potential risks of providing access to powerful AI technology without adequate oversight and moderation. As the use of AI in cyber-crime becomes more prevalent, it is crucial for organizations such as OpenAI to prioritize effective monitoring and control measures to ensure the technology is not abused by malicious actors.