Hackers are exploiting interest in ChatGPT, an artificial intelligence (AI) tool, to distribute malicious software and browser extensions that can harvest crypto wallet recovery phrases stored in browsers and potentially steal funds. In response to this form of cybercrime, the social media company Meta shared 1,000 malicious links and browser extensions for users to be wary of. Notably, ChatGPT exists only as a web interface and has no official mobile app or browser extension, so any product claiming otherwise should be treated as suspect. After Meta detected and disrupted the malicious activity around ChatGPT, some hackers pivoted to impersonating Google Bard, another AI tool.
Criminals can also use ChatGPT to draft convincing phishing emails; victims of crypto crimes stemming from such scams reported losing about $3 billion in 2022. Beyond phishing, AI can be used to write malicious software for criminals, and several prominent tech leaders recently signed an open letter calling for sound AI governance. Coinbase CEO Brian Armstrong, however, continues to support AI innovation.
To help crypto users stay informed, Crypto.com has introduced Amy, an AI assistant that draws on ChatGPT's language model while providing only non-financial advice. FalconX plans to launch a similar AI tool called Satoshi, intended to advise on both the cheapest trade execution and asset evaluations. Even so, it is important to note that AI models can still produce misinformation due to the so-called hallucination problem, in which a model confidently generates false information.
Meta Platforms is a social media company focused on disrupting malicious activity, particularly the fraudulent use of AI tools. It was the first to detect the malicious activity surrounding ChatGPT, and it took steps to warn users and to prevent similar cybercriminals from succeeding in the future, reinforcing its role as an industry leader in protecting users of its products and services.