Cybercriminals are stepping up their game with an advanced AI tool called FraudGPT, dubbed the ‘evil twin’ of ChatGPT. Within just one week of surfacing, the tool has already been used by hackers to steal the identities of unsuspecting internet users. FraudGPT generates phishing pages, harmful code, scam texts, deceptive emails, and fraudulent documents that trick victims, both individuals and businesses, into revealing their security details and passwords.
One worrying aspect of FraudGPT is its ability to eliminate the poor spelling and grammar that often give scam emails away, making them far more convincing. Experts believe that even novice cybercriminals could cause significant damage with this tool. Rakesh Krishnan, a cybercrime expert, notes that the emergence of FraudGPT on the dark web signals a shift in the threat landscape. The tool is being sold on various dark web marketplaces and on Telegram for £175 per month or £1,300 per year, and has already garnered over 3,000 sales.
FraudGPT, described as a bot without limitations, rules, or boundaries, is being marketed by a verified vendor on dark web platforms such as Alphabay, Empire, and WHM. The creator boasts that the bot has limitless potential, capable of creating undetectable malware and identifying websites vulnerable to credit card fraud.
The increase in phishing attacks is a cause for concern. According to a report by cybersecurity company Egress, 92% of organizations fell victim to phishing attacks in 2022, with 54% experiencing financial losses as a result. This data is based on responses from 500 cybersecurity experts at companies in the UK, US, and Australia. Patrick Harr, an online security expert, points out that, thanks to tools like FraudGPT, fraudsters no longer need to craft each fraudulent email by hand.
While organizations can develop AI tools like ChatGPT with ethical safeguards, there is a risk that cybercriminals will find ways to reimplement similar technology without these protections. The constantly evolving landscape of cybercrime underscores the importance of staying vigilant and implementing robust security measures.
As more sophisticated AI tools like FraudGPT become available to cybercriminals, it is crucial for individuals and businesses to remain cautious and adopt proactive security strategies. Educating employees about the dangers of phishing attacks and encouraging the use of multi-factor authentication to protect sensitive data are essential first steps. By continuously updating security protocols, organizations can mitigate the risks posed by AI-powered threats and help protect both themselves and their customers.
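To illustrate why multi-factor authentication blunts credential phishing, here is a minimal sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, the scheme behind most authenticator apps. It uses only the Python standard library; the function name and parameters are illustrative, not taken from any product mentioned above. Because the code changes every 30 seconds, a password harvested by a phishing page is not enough on its own to log in.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1).

    secret_b32: the shared secret as a base32 string (as shown in QR-code setup).
    at: Unix timestamp to compute the code for (defaults to the current time).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890"
# (base32: GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ) at time 59 yields "287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

In practice organizations should deploy a vetted library or hardware tokens rather than hand-rolled code; the sketch only shows why a stolen static password no longer suffices once a second, time-limited factor is required.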