China Unveils Advanced AI Chatbots, Raising Concerns Over National Security Threats
China’s technological advancements continue to make headlines as the country unveils its latest artificial intelligence (AI) chatbots. With the introduction of ChatGPT counterparts like Baidu’s Ernie and Alibaba’s Tongyi Qianwen, concerns are mounting over the potential national security threats posed by China’s AI landscape.
Baidu’s Ernie, initially unveiled in March, has now been approved for public download, offering users a suite of AI-native apps that showcase the core abilities of generative AI: understanding, generation, reasoning, and memory. Alibaba, for its part, announced its AI model, Tongyi Qianwen, on September 13 and is collaborating with organizations including OPPO, Taobao, DingTalk, and Zhejiang University, which will train their own large language models (LLMs) on the technology.
However, experts caution that China’s deployment of cutting-edge AI tools could heighten national security risks for its adversaries. LLMs rely on deep learning algorithms that excel at language processing tasks such as recognizing, translating, predicting, and generating text. By some estimates, China is as much as 15 years ahead of the rest of the world in applying complex LLMs, setting the country apart in digital strength and integration.
As strategic competition between the United States and China intensifies, Beijing has been actively supporting Chinese companies in the AI field. Mark Bryan Manantan, Director of Cybersecurity and Critical Technologies at the Pacific Forum, stresses the importance of analyzing the regulatory framework put forward by the Cyberspace Administration of China to grasp Beijing’s approach to the LLM boom. The framework’s focus on information security, data privacy, and personal information protection aligns with core socialist values and existing laws and policies.
The concern surrounding China’s AI landscape lies partly in the black-box nature of these generative AI models: their inner workings and decision-making processes are not easily interpretable by humans, making their output unpredictable. This opacity raises questions about the ideological impetus behind China’s AI advancements, which other nations may not fully comprehend.
Despite this lack of transparency, it is important to acknowledge that Chinese tech companies are required to report to the regime at every step, suggesting a strategic thrust behind their LLM initiatives. The Chinese government’s control over the dissemination of political ideology through avenues like education diminishes the need for AI to propagate such ideas. Instead, LLMs may offer valuable insights into public sentiment and serve as tools for information auto-filling and cognitive warfare, influencing adversaries’ perceptions, beliefs, and decision-making processes.
China’s military applications of AI also come into focus. Major General Hu Xiaofeng, a professor at China’s National Defense University, acknowledged that cutting-edge AI technologies like ChatGPT will inevitably be applied in the military field. The ability of LLMs to produce persuasive text raises concerns about the potential use of advanced AI in activities such as malware generation, hacking, and sophisticated phishing.
Censorship is another issue shadowing China’s AI development. While People’s Liberation Army (PLA) media have not addressed it explicitly, the Chinese Communist Party’s tight control over information could limit the overall efficacy of generative AI if negative narratives about the Party are disallowed.
In summary, China’s unveiling of advanced AI chatbots raises valid concerns over national security threats. The country’s lead in deploying complex LLMs underscores its digital prowess, while the black-box nature of these models and the government’s tight control add a further layer of opacity. As China leverages AI for military applications and cognitive warfare, the urgency of understanding and addressing these national security risks becomes paramount.