Brace Yourself for a Tidal Wave of ChatGPT Email Scams: Thanks to large language models such as OpenAI’s GPT models, a single scammer can now run hundreds or thousands of cons in parallel, night and day, in multiple languages. AI chatbots are faster, more flexible, and subtler than human-run scams as they work through billions of potential victims. Two developments make this possible: open-source LLMs that can run on powerful laptops, and the vast troves of personal data amassed through surveillance capitalism, which let scammers mount personalized attacks that were once within reach only of nation-states.
OpenAI tries to block malicious uses of its models, but jailbreakers still find ways around the AI’s guardrails. The sheer volume of potential scams (by some estimates as many as 10 billion) poses an unprecedented challenge, and defenses need to catch up before the signal-to-noise ratio drops dramatically.
ChatGPT is not a company but an AI chatbot developed by OpenAI and released in November 2022. Built on OpenAI’s GPT family of large language models, it generates natural, interactive conversational text, and it reached an estimated 100 million users within two months of launch.
Cormac Herley is a principal researcher at Microsoft Research and an expert on online security and human-computer interaction; he holds a PhD from Columbia University. Much of his research examines the economics of online scams and how attackers separate the gullible from the suspicious. His 2012 paper, “Why Do Nigerian Scammers Say They Are from Nigeria?”, showed that a barrage of obviously implausible scam emails acts as a filter: by driving away skeptics up front, it leaves scammers with only their most credulous, and therefore most profitable, targets. Herley’s formal-methods research has also demonstrated how to design secure protocols in the face of an adversary.
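Herley’s filtering argument is ultimately an expected-value calculation: because every reply costs the scammer real follow-up effort, a pitch that attracts fewer but more credulous respondents can out-earn a more plausible one. A minimal sketch of that trade-off follows; all the numbers (email costs, reply rates, payouts) are hypothetical values chosen for illustration, not figures from the paper.

```python
# Illustrative sketch of the economics behind Herley's 2012 filtering
# argument. All parameter values below are hypothetical assumptions.

def expected_profit(n_emails, send_cost, p_reply, p_pay_given_reply,
                    followup_cost, payout):
    """Expected profit when each reply costs real scammer effort."""
    replies = n_emails * p_reply
    revenue = replies * p_pay_given_reply * payout
    costs = n_emails * send_cost + replies * followup_cost
    return revenue - costs

# A plausible pitch draws many replies, but most repliers wise up
# before paying, so the per-reply follow-up effort is mostly wasted.
plausible = expected_profit(
    n_emails=1_000_000, send_cost=0.0001,
    p_reply=0.01, p_pay_given_reply=0.001,
    followup_cost=20.0, payout=1_000.0)

# An obviously absurd pitch self-selects only the most credulous:
# far fewer replies, but each replier is far more likely to pay.
absurd = expected_profit(
    n_emails=1_000_000, send_cost=0.0001,
    p_reply=0.0005, p_pay_given_reply=0.2,
    followup_cost=20.0, payout=1_000.0)

print(f"plausible pitch: {plausible:,.0f}")
print(f"absurd pitch:    {absurd:,.0f}")
```

With these made-up parameters the absurd pitch comes out well ahead, which is exactly the filtering effect Herley described: the "obviousness" of the scam does the target selection for free.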