AI Tools ChatGPT and Bard Vulnerable to Scammers, Posing New Threats, Investigation Reveals
Con artists and scammers are exploiting vulnerabilities in AI tools such as ChatGPT and Bard, according to a recent investigation by consumer advocacy group Which?. The investigation found that these AI chatbots lack effective defenses to stop fraudsters from producing convincing scams, posing a significant threat to consumers’ cybersecurity.
One of the key ways individuals identify scam emails and texts is the poor grammar and spelling often found in fraudulent messages. However, Which? discovered that scammers could easily use AI to create messages that convincingly impersonate legitimate businesses, making the fraud far harder for consumers to spot.
The City of London Police estimates that over 70% of fraud experienced by UK victims could involve an international component, with fraudsters based overseas either collaborating with offenders in the UK or running the scams entirely from abroad. AI chatbots enable these fraudsters to send professional-looking emails regardless of where they are based.
During the investigation, Which? asked the latest free version of ChatGPT to create a phishing email from PayPal. The AI tool refused the request. But when the researchers removed the term ‘phishing’ and simply asked the bot to write an email, it produced a professionally written message mimicking a legitimate PayPal security notice, complete with steps to secure a PayPal account and links to reset passwords and contact customer support. Scammers could easily adapt such a template to redirect recipients to malicious websites.
Similarly, when Which? asked Bard to create a phishing email impersonating PayPal, the AI tool initially declined. But when the researchers rephrased the request as an email about an unauthorized login to the recipient’s PayPal account, Bard supplied steps for changing the account password securely, lending the scam message an air of authenticity. Bard even suggested where to include a link, such as a ‘PayPal Login Page’, that could redirect recipients to phishing websites.
Both ChatGPT and Bard were also tested on scam text messages. The results showed that these AI tools could generate convincing phishing texts, including suggestions on where to insert a link for recipients to arrange a redelivery.
The investigation by Which? highlights how easily AI tools like ChatGPT and Bard can be misused by fraudsters for malicious purposes, underscoring the need for stronger safeguards to protect individuals from AI-facilitated scams. It is imperative that the upcoming AI summit convened by the government address these existing harms rather than focusing solely on the long-term risks associated with frontier AI technologies.
To protect themselves, consumers should remain vigilant and avoid clicking suspicious links in emails and texts. They can also sign up for Which?’s free scam alert service to stay informed about the latest scams and outsmart fraudsters.
Commenting on the investigation, Rocio Concha, Which? Director of Policy and Advocacy, emphasized the urgency of addressing the risks posed by AI tools that can be exploited for scams. She called for immediate action to protect people from the harms occurring in the present, rather than solely focusing on potential risks in the distant future.
The investigation highlights the need for AI developers to implement robust defenses against scams and fraudulent activities. A Google spokesperson said that Bard has policies in place to prevent the generation of content for deceptive or fraudulent purposes, while acknowledging the importance of continuously improving those safeguards.
OpenAI, the developer of ChatGPT, did not respond to Which?’s request for comment.
As AI technology continues to advance, it is crucial for developers, policymakers, and authorities to collaborate in developing effective safeguards to protect individuals from the growing threats posed by scammers exploiting AI tools.