Trustworthy AI regulation is fast becoming a hot topic in China. The country’s internet regulator recently announced a draft regulation on the use of generative AI amid the excitement over ChatGPT and similar products. At the same time, high-profile cases, such as a face-swapping fraud in the eastern Chinese city of Fuzhou, have served as a reminder of the threats posed by malicious actors wielding AI-powered tools.
On April 20th, the Fuzhou police disclosed that a fraudster had stolen an individual’s WeChat account and used it to place a video call to a businessman. Impersonating the account owner with AI face-swapping technology, the perpetrator conned the businessman into transferring 4.3 million yuan ($610,000). The scheme succeeded, demonstrating how a lack of regulation can leave people exposed to AI-enabled fraud.
The company at the center of the case is a Fuzhou tech firm whose legal representative was the victim of the widely discussed scam. The firm is now working to understand and implement stronger precautionary measures, such as tighter authentication protocols, as a necessary foundation for continuing to use AI in its products.
Notably, the victim was the firm’s own legal representative: even a well-educated professional familiar with the technology fell for a convincing AI-powered scam, underscoring how important it is to reckon with the consequences of AI’s rapid development and expanding use.
To address such cases and ensure ethical and responsible AI use, the Chinese government is drafting new regulations on generative AI, including robust guidelines and restrictions designed to protect users from malicious actors. Keeping pace with this fast-growing, ever-evolving market will be essential as China scrambles to stay ahead in the race for AI innovation.