Title: AI-Aided Scam Wave Threat via ChatGPT and Bard Raises Concerns among Consumer Groups
Online fraudsters could potentially exploit large language models, such as ChatGPT and Bard, to launch convincing scams, warns consumer group Which?. These AI-powered programs currently lack effective defenses, making it easier for criminals to create messages that convincingly impersonate businesses and official bodies.
Traditionally, consumers often detect scam emails and texts due to their poor grammar and spelling. However, ChatGPT and Bard have the ability to generate messages that are flawlessly written and appear legitimate. The City of London Police estimates that over 70% of fraud cases involving UK victims may have an international component, and AI services enable fraudsters to send professional-looking emails from anywhere in the world.
In an investigation conducted by Which?, experts discovered that both ChatGPT and Bard have some safeguards in place. However, these safeguards are easily circumvented by fraudsters. In a test scenario, researchers asked ChatGPT to create an email notifying the recipient that someone had logged into their PayPal account. Within seconds, ChatGPT generated a professionally written email titled Important Security Notice – Unusual Activity Detected on Your PayPal Account. The email included detailed steps on securing the account and links to reset the password and contact customer support. Unfortunately, these links could potentially redirect recipients to malicious websites set up by fraudsters.
Similarly, Which? found that Bard could be used to create counterfeit security email alerts, directing victims to fake websites specifically designed to collect personal and security information.
Rocio Concha, the director of policy and advocacy at Which?, emphasized that the AI platforms like ChatGPT and Bard are currently failing to protect users from fraudulent activities. Concha called for the government’s upcoming AI summit to prioritize addressing the immediate risks posed by this technology rather than solely focusing on long-term concerns associated with frontier AI. In the meantime, consumers are urged to exercise caution and avoid clicking on suspicious links in emails and texts, even if they seem legitimate.
When approached for comment, Google stated that it has policies against content generation for deceptive or fraudulent activities like phishing. While the issue of generative AI producing negative results affects all large language models, Google assured that Bard includes important safeguards that will be further improved over time.
OpenAI, the creator of ChatGPT, did not respond to Which?’s request for comment.
As concerns arise regarding the potential exploitation of AI technology for fraudulent activities, it is important for individuals to remain vigilant and cautious in their online interactions. Scammers are becoming increasingly sophisticated, and it is crucial for both technology providers and users to implement robust security measures to protect against these evolving threats.
In other news, a recent survey conducted by the Institution of Engineering and Technology revealed that most Britons underestimate the extent to which they interact with AI technology in their daily lives. While over half of the respondents claimed to use AI once a day or less, almost two-thirds reported engaging in various online activities that rely on AI on a daily basis. The survey highlights the need for greater awareness and understanding of the pervasive role of AI in everyday tools and applications.
As AI continues to shape our digital landscape, it is essential for individuals to educate themselves about its capabilities and potential risks. The responsible development and use of AI technologies, coupled with heightened consumer awareness, can help mitigate the threat of AI-enabled scams and ensure a safer online environment for all.