A man in China has reportedly been detained for allegedly using the popular artificial intelligence chatbot ChatGPT to generate and spread fake news. According to a report by SCMP, officials from a county police bureau traced a fabricated story about a train accident that supposedly killed nine people back to a company owned by the suspect.
The suspect is believed to have committed the offense of “picking quarrels and provoking trouble,” which carries a maximum prison sentence of five years, or up to ten if the crime is deemed particularly severe.
ChatGPT’s language model allows it to produce varied responses to prompts and has proven capable of generating convincing news articles. A notable example involved German magazine editor Anne Hoffmann, who was dismissed for publishing an AI-generated interview with former Formula One champion Michael Schumacher. On April Fools’ Day last year, ChatGPT was even used to produce a satirical article about Krablr, a real-time crab pricing engine, developing a generative AI tool that supposedly speaks to crabs and urges them to breed more and boost yields.
The misuse of artificial intelligence to spread misinformation is not exclusive to China, and many other countries are also seeking to regulate the technology. China’s internet regulator has implemented measures such as requiring clear labeling of AI-generated videos and images published in the public domain. This case is a reminder of the risks posed by the rapid expansion of AI technology and the danger that malicious actors may exploit it for their own gain.
The man in question is believed to have worked for a company that operated several personal media platforms across southern China. Police searched his residence and seized a computer as evidence. Although ChatGPT is blocked in China, the suspect may have accessed the chatbot through a Virtual Private Network (VPN), which can bypass the Chinese government’s internet censorship.
It remains to be seen what punishment awaits the suspect, but the case serves as a reminder of the potential dangers of artificial intelligence and the growing demand for better regulations and standards governing the public use of AI as a journalistic tool.