Consumer Group Warns of AI-Aided Scam Wave via ChatGPT and Bard

Online fraudsters could exploit large language models such as ChatGPT and Bard to launch convincing scams, warns consumer group Which?. These AI-powered programs currently lack effective defenses against misuse, making it easier for criminals to create messages that convincingly impersonate businesses and official bodies.

Traditionally, consumers have often spotted scam emails and texts by their poor grammar and spelling. ChatGPT and Bard, however, can generate messages that are flawlessly written and appear legitimate. The City of London Police estimates that over 70% of fraud cases involving UK victims may have an international component, and AI services enable fraudsters to send professional-looking emails from anywhere in the world.

In an investigation by Which?, researchers found that both ChatGPT and Bard have some safeguards in place, but that these are easily circumvented by fraudsters. In one test, researchers asked ChatGPT to create an email notifying the recipient that someone had logged into their PayPal account. Within seconds, ChatGPT generated a professionally written email titled "Important Security Notice – Unusual Activity Detected on Your PayPal Account". The email included detailed steps for securing the account, along with links to reset the password and contact customer support. In a real scam, those links could redirect recipients to malicious websites set up by fraudsters.
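The weakness in such a message is not its prose but where its links actually lead. As a purely illustrative sketch (not part of Which?'s investigation, and using hypothetical function and domain names), the Python snippet below shows one simple check a mail filter or cautious reader could apply: extract every link in an email body and flag any whose host does not belong to the domain of the organization the message claims to come from.

```python
import re
from urllib.parse import urlparse

def flag_suspicious_links(email_body: str, claimed_domain: str) -> list[str]:
    """Return links whose host does not belong to the sender's claimed domain.

    A polished phishing email impersonating PayPal, for example, may still
    link to an unrelated host; that mismatch is the tell-tale sign.
    """
    links = re.findall(r"https?://[^\s<>]+", email_body)
    suspicious = []
    for link in links:
        host = urlparse(link).hostname or ""
        # Accept the claimed domain itself or any of its subdomains.
        if host != claimed_domain and not host.endswith("." + claimed_domain):
            suspicious.append(link)
    return suspicious

# Example: an email that claims to be from PayPal but links elsewhere.
body = (
    "Unusual activity was detected on your account. "
    "Reset your password at https://paypal-security-check.example.net/reset"
)
print(flag_suspicious_links(body, "paypal.com"))
# ['https://paypal-security-check.example.net/reset']
```

Real mail filters rely on far richer signals, such as sender authentication and domain reputation, but the underlying idea is the same: with AI-written text, the destination of a link is a more reliable warning sign than the quality of the grammar.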

Similarly, Which? found that Bard could be used to create counterfeit security email alerts, directing victims to fake websites specifically designed to collect personal and security information.

Rocio Concha, director of policy and advocacy at Which?, emphasized that AI platforms such as ChatGPT and Bard are currently failing to protect users from fraudulent activities. Concha called for the government's upcoming AI summit to prioritize the immediate risks posed by this technology rather than focusing solely on long-term concerns associated with frontier AI. In the meantime, consumers are urged to exercise caution and avoid clicking on suspicious links in emails and texts, even if they seem legitimate.


When approached for comment, Google said it has policies against generating content for deceptive or fraudulent activities such as phishing. While generative AI producing harmful output is a challenge for all large language models, Google said that Bard includes important safeguards that will be further improved over time.
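Neither company describes how its safeguards work. Purely as a hypothetical illustration, and not a representation of Google's or OpenAI's actual systems, a provider-side guardrail might include a pre-generation check that refuses requests with obviously fraudulent intent:

```python
# Hypothetical sketch of a pre-generation guardrail; real systems rely on
# trained classifiers and policy models, not keyword lists like this one.
PHISHING_PATTERNS = (
    "phishing", "fake login page", "steal password",
    "pretend to be paypal", "impersonate a bank",
)

def refuse_if_fraudulent(prompt: str) -> bool:
    """Return True if the prompt should be refused under an anti-fraud policy."""
    lowered = prompt.lower()
    return any(pattern in lowered for pattern in PHISHING_PATTERNS)

print(refuse_if_fraudulent("Write a fake login page for my bank"))  # True
print(refuse_if_fraudulent("Summarise today's security news"))      # False
```

As Which?'s tests suggest, such checks are easy to circumvent: a fraudster can phrase the request innocuously, for example by asking for an email warning a customer about unusual account activity, and add the malicious links afterwards.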

OpenAI, the creator of ChatGPT, did not respond to Which?’s request for comment.

As concerns arise regarding the potential exploitation of AI technology for fraudulent activities, it is important for individuals to remain vigilant and cautious in their online interactions. Scammers are becoming increasingly sophisticated, and it is crucial for both technology providers and users to implement robust security measures to protect against these evolving threats.

In other news, a recent survey conducted by the Institution of Engineering and Technology revealed that most Britons underestimate the extent to which they interact with AI technology in their daily lives. While over half of the respondents claimed to use AI once a day or less, almost two-thirds reported engaging in various online activities that rely on AI on a daily basis. The survey highlights the need for greater awareness and understanding of the pervasive role of AI in everyday tools and applications.

As AI continues to shape our digital landscape, it is essential for individuals to educate themselves about its capabilities and potential risks. The responsible development and use of AI technologies, coupled with heightened consumer awareness, can help mitigate the threat of AI-enabled scams and ensure a safer online environment for all.

Frequently Asked Questions (FAQs)

What are ChatGPT and Bard?

ChatGPT and Bard are large language models powered by artificial intelligence. They are designed to generate human-like text and are used for various purposes, including customer service chatbots and content creation.

How are scammers using ChatGPT and Bard to launch scams?

Scammers exploit these AI-powered programs by generating convincing messages that impersonate businesses and official bodies. They take advantage of the flawlessly written and legitimate-looking text generated by ChatGPT and Bard to trick people into providing personal and security information or clicking on malicious links.

How do consumers usually detect scam emails and texts?

Traditionally, consumers have relied on poor grammar and spelling as indicators of scam emails and texts. However, with ChatGPT and Bard generating flawless text, these traditional indicators become less reliable.

Do ChatGPT and Bard have any safeguards in place to prevent scams?

ChatGPT and Bard do have some safeguards, but they can be easily circumvented by fraudsters. These safeguards are not currently effective in preventing scams.

What was the result of Which?'s investigation into ChatGPT and Bard?

Which? discovered that both ChatGPT and Bard can be used to create convincing scam messages. In a test scenario, ChatGPT generated a professional-looking email impersonating PayPal, including links to reset passwords, while Bard created counterfeit security email alerts directing victims to fake websites.

What does Rocio Concha from Which? emphasize regarding the protection provided by AI platforms?

Rocio Concha emphasized that ChatGPT and Bard currently fail to protect users from fraudulent activities. She called for prioritizing the immediate risks associated with these technologies rather than solely focusing on long-term concerns.

What precautionary measures are consumers urged to take?

Consumers are urged to exercise caution and avoid clicking on suspicious links in emails and texts, even if they appear legitimate. It is important to be vigilant and skeptical of messages received online.

How is Google addressing the issue of AI-generated deceptive content?

Google has policies against content generation for deceptive or fraudulent activities like phishing. Google mentioned that Bard, one of their language models, includes safeguards that will be further improved over time.

What is the response from OpenAI, the creator of ChatGPT?

OpenAI did not respond to Which?'s request for comment.

What did the recent survey by the Institution of Engineering and Technology reveal?

The survey found that most Britons underestimate the extent to which they interact with AI technology in their daily lives. While over half of the respondents claimed to use AI once a day or less, almost two-thirds reported engaging in various online activities that rely on AI on a daily basis.

What is the importance of consumer awareness and understanding of AI?

As AI continues to play a significant role in our digital landscape, it is crucial for individuals to educate themselves about its capabilities and potential risks. Greater awareness and understanding can help mitigate the threat of AI-enabled scams and contribute to a safer online environment.

