AI Tools ChatGPT and Bard Vulnerable to Scammers, Posing New Threats, Investigation Reveals


Con artists and scammers are exploiting the vulnerabilities of AI tools like ChatGPT and Bard, according to a recent investigation conducted by consumer advocacy group Which?. The investigation found that these AI chatbots lacked effective defenses to prevent fraudsters from unleashing convincing scams. This poses a significant threat to consumers’ cybersecurity.

One key way individuals identify scam emails and texts is by the poor grammar and spelling that often mark fraudulent messages. However, the Which? investigation found that scammers could easily use AI to produce messages that convincingly impersonate legitimate businesses, making the fraud much harder for consumers to spot.

The City of London Police estimates that over 70% of fraud experienced by UK victims could involve an international component, with fraudsters based overseas collaborating with offenders in the UK or solely driving the scams from outside the country. AI chatbots enable these fraudsters to send professional-looking emails regardless of their location.

During the investigation, Which? asked the latest free version of ChatGPT to create a phishing email from PayPal. The AI tool refused the request. But when the researchers removed the word phishing and simply asked the bot to write an email, it generated a professionally written message mimicking a legitimate PayPal security notice, complete with steps for securing a PayPal account and links to reset the password and contact customer support. Scammers could easily modify such a template to redirect recipients to malicious websites.


Similarly, when Which? asked Bard to create a phishing email impersonating PayPal, the AI tool initially declined to assist. But when the researchers rephrased their request and asked Bard to create an email about an unauthorized login to a recipient’s PayPal account, it provided steps for changing the account password securely, making the scam message look genuine. Bard also suggested where to include a link, such as a PayPal Login Page, that could redirect recipients to phishing websites.

Which? also tested both ChatGPT and Bard on scam text messages. The results showed that the AI tools could generate convincing phishing texts, including missed-delivery messages with instructions on where to insert a redelivery link.

The investigation by Which? highlights the vulnerability of AI tools like ChatGPT and Bard to being misused by fraudsters for malicious purposes. This raises concerns about the need for stronger safeguards to protect individuals from scams facilitated by AI. It is imperative for the upcoming AI summit convened by the government to address these existing harms rather than solely focusing on long-term risks associated with frontier AI technologies.

To ensure their safety, consumers should remain vigilant and avoid clicking on suspicious links in emails and texts. They should also consider signing up for Which?’s free scam alert service to stay informed about the latest scams and outsmart fraudsters.

Commenting on the investigation, Rocio Concha, Which? Director of Policy and Advocacy, emphasized the urgency of addressing the risks posed by AI tools that can be exploited for scams. She called for immediate action to protect people from the harms occurring in the present, rather than solely focusing on potential risks in the distant future.


The investigation highlights the need for AI developers to implement robust defenses against scams and fraudulent activities. A Google spokesperson said that Bard has policies in place to prevent the generation of content for deceptive or fraudulent purposes, while acknowledging the importance of continuously improving those safeguards.

OpenAI, the developer of ChatGPT, did not respond to Which?’s request for comment.

As AI technology continues to advance, it is crucial for developers, policymakers, and authorities to collaborate in developing effective safeguards to protect individuals from the growing threats posed by scammers exploiting AI tools.

Frequently Asked Questions (FAQs) Related to the Above News

What are ChatGPT and Bard?

ChatGPT and Bard are AI tools developed by OpenAI and Google, respectively. They are advanced chatbot models that utilize artificial intelligence to generate human-like text responses.

What did the investigation by Which? reveal about these AI tools?

The investigation conducted by Which? revealed that ChatGPT and Bard lacked effective defenses to prevent scammers from using them to create convincing scams. These vulnerabilities allow fraudsters to exploit the AI tools to create messages that impersonate legitimate businesses and deceive consumers.

How do scammers use these AI tools to carry out their fraudulent activities?

Scammers can use AI tools like ChatGPT and Bard to generate professional-looking emails and text messages that mimic legitimate communication from reputable companies. They can create phishing scams that trick recipients into revealing personal information or visiting malicious websites.

What risks do these vulnerabilities pose to consumers?

The vulnerabilities of ChatGPT and Bard make it more difficult for consumers to identify fraudulent messages. Scammers can create convincing scams that may bypass traditional indicators of fraud, such as poor grammar and spelling. This puts consumers' cybersecurity at risk and increases the chances of falling victim to scams.

What precautions should consumers take to protect themselves from scams facilitated by AI tools?

Consumers should remain vigilant and exercise caution when receiving emails or text messages, especially those requesting personal information or containing suspicious links. It is important to avoid clicking on such links and consider signing up for scam alert services, like the one provided by Which?, to stay informed about the latest scams and protect themselves from fraudsters.

How did AI developers respond to the investigation's findings?

Google, the developer of Bard, stated that they have policies in place to prevent the use of their AI tool for fraudulent purposes. They acknowledged the importance of continuously improving safeguards. OpenAI, the developer of ChatGPT, did not respond to Which?'s request for comment.

What actions have consumer advocacy groups and authorities called for?

Consumer advocacy groups and authorities have emphasized the need for stronger safeguards and regulations to protect individuals from scams facilitated by AI tools like ChatGPT and Bard. They call for immediate action to address the present harms rather than solely focusing on long-term risks associated with AI technologies.

What is the significance of the investigation's findings?

The investigation reveals the vulnerability of AI tools to being misused by fraudsters and highlights the urgent need for robust defenses against scams and fraudulent activities. It underscores the importance of collaboration among developers, policymakers, and authorities to ensure effective safeguards in the face of growing threats posed by scammers exploiting AI tools.

