Four cyber attackers in China have been arrested for developing ransomware with the help of ChatGPT, the first such case in the country involving the popular chatbot that is not officially available locally.
The attack was first reported by an unidentified company in Hangzhou, capital of eastern Zhejiang province, which had its systems blocked by ransomware, according to a Thursday report by state-run Xinhua News Agency. The hackers demanded 20,000 Tether, a stablecoin pegged one-to-one to the US dollar, to restore access.
The police in late November arrested two suspects in Beijing and two others in Inner Mongolia, who admitted to “writing versions of ransomware, optimising the program with the help of ChatGPT, conducting vulnerability scans, gaining access through infiltration, implanting ransomware, and carrying out extortion”, the report said.
The report did not mention whether the use of ChatGPT was part of the charges. The chatbot exists in a legal grey area in China, as Beijing has sought to curb access to foreign generative artificial intelligence products.
After OpenAI introduced its chatbot at the end of 2022, igniting an arms race in the field among tech giants, ChatGPT and similar products gained interest among Chinese users. However, OpenAI has blocked internet protocol addresses in China, Hong Kong and sanctioned markets like North Korea and Iran. Some users get around restrictions using virtual private networks (VPNs) and a phone number from a supported region.
On the commercial side, there are “compliance risks” for domestic companies that build or rent VPNs to access OpenAI’s services, including ChatGPT and text-to-image generator Dall-E, according to a report by law firm King & Wood Mallesons.
Legal cases involving generative AI have increased along with the technology's popularity. In February, Beijing police warned that ChatGPT could be used to "commit crimes and spread rumours".
In May, police in northwestern Gansu province detained a man who allegedly used ChatGPT to generate fake news about a train crash and spread it online, where it received more than 15,000 clicks.
In August, Hong Kong police arrested six people in a crackdown on a fraud syndicate that used deepfake technology to create doctored images of identification documents used for loan scams targeting banks and moneylenders.
Controversies around the technology are arising overseas, as well. Brian Hood, mayor of Hepburn Shire in Australia, sent a legal notice to OpenAI in March after ChatGPT wrongly implicated him in a bribery and corruption scandal.
The US Federal Trade Commission issued a warning this year about scammers weaponising AI-cloned voices to impersonate people, which can be accomplished with just a short audio clip of a person’s voice.
More recently, people and organisations whose work was used to train large language models have been pushing back against what they see as mass intellectual property infringement. In a case expected to be closely watched for its legal implications, The New York Times this week sued OpenAI and Microsoft, the AI firm's main backer, alleging that the companies' powerful models used millions of articles for training without permission.