A smartphone displaying the logo of the artificial intelligence OpenAI research laboratory in Manta, near Turin, Italy, on Oct. 4, 2023. (Marco Bertorello/AFP via Getty Images)
ChatGPT, the popular AI chatbot released by OpenAI in late 2022, is suspected of censoring China-related topics and manipulating information in translation.
Activist Alleges Censorship by ChatGPT on China-Related Topics
A pro-democracy activist has accused ChatGPT, the AI chatbot developed by OpenAI, of censoring China-related topics and manipulating information. In a recent post, Aaron Chang, known as Sydney Winnie on X (formerly Twitter), voiced concerns about the chatbot’s refusal to generate an image of Tiananmen Square and questioned whether ChatGPT had received funding from the Chinese Communist Party (CCP). ChatGPT cited internal guidelines to justify its refusal, stating that it aimed to avoid potential disputes or misunderstandings.
Translation and Content Manipulation Concerns
Beyond image censorship, ChatGPT has also drawn criticism for its handling of China-related translations. Alice, a media professional who uses ChatGPT for translation work, presented an example in which the AI tool altered and omitted parts of a text critical of Beijing’s poverty-elimination policy. ChatGPT condensed a six-paragraph Chinese text into a three-paragraph English translation, dropping significant details, including the name of CCP leader Xi Jinping. This opaque process raises concerns about potential bias in text generated by black-box AI systems.
Calls for Transparency and User Diversity
AI researcher Sahar Tahvili highlights the need for transparency in ChatGPT’s decision-making process. The lack of clarity about the internal workings and source references of the black-box model poses a risk of biased text generation. Tahvili suggests that greater user diversity, especially across languages such as Chinese, could help developers improve the accuracy and fairness of ChatGPT’s responses. However, she also notes that Chinese regulators have restricted access to ChatGPT over concerns that it could generate answers to sensitive questions, which could in turn limit its performance accuracy in the Chinese language.
Similar Concerns Across AI Language Models
Experts argue that ChatGPT is not alone in its cautious approach to China-related topics. Similar guidelines and practices have been adopted by Google’s Bard, another large language model. While companies may aim to avoid promoting bias, concerns remain about human auditing and overly conservative handling of sensitive topics. Chinese engineers and product managers play significant roles in the development and testing of both OpenAI’s and Google’s AI models. Consequently, ensuring completely unbiased responses from these models, which are constantly retuned as their underlying data changes, can be challenging.
OpenAI Yet to Respond
The Epoch Times reached out to OpenAI for comment on the allegations surrounding ChatGPT’s censorship of China-related topics, but the company has yet to respond. The impact of these accusations on the future use and popularity of ChatGPT, particularly in the Chinese market, remains to be seen. Competitors such as Baidu, Inc., with its own AI chatbot powered by Ernie 4.0, could gain an advantage if ChatGPT’s performance accuracy suffers from restrictions or user mistrust.
As developments unfold, it is essential that the developers of AI models like ChatGPT address concerns about transparency and bias. A focus on responsible use and the inclusion of diverse perspectives can help ensure the accuracy and fairness of AI-generated content.