Study Raises Concerns About Political Bias in ChatGPT and Its Use in Education and Policymaking
A recent study conducted by researchers at the University of East Anglia (UEA) has highlighted concerns about potential bias in OpenAI’s language model, ChatGPT. The study, titled More Human than Human: Measuring ChatGPT Political Bias, is believed to be the first comprehensive exploration of ChatGPT’s political predispositions.
ChatGPT is a popular chatbot developed by OpenAI that utilizes advanced language processing technology. However, allegations have been made that the chatbot tends to favor left-wing viewpoints, reflecting the stances of the Labour Party in the UK and the Democratic Party in the US. This has raised concerns about the suitability of ChatGPT for use in policymaking and education.
The UEA researchers found a strong association between ChatGPT’s default responses and the answers it gave when asked to impersonate supporters of left-leaning parties and figures such as the Labour Party, the Democratic Party, and Brazil’s former president Lula da Silva. By contrast, no comparable alignment appeared when the chatbot simulated the views of right-leaning parties and figures such as the Conservative Party, the Republican Party, or Brazil’s former president Jair Bolsonaro.
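The comparison step the researchers describe can be sketched in a few lines of code. The sketch below is illustrative only, not taken from the paper: it assumes questionnaire answers are encoded as numbers on a Likert-style scale, and measures how closely the model's default answers track the answers it gives while impersonating a persona, using a plain Pearson correlation. All names and data are hypothetical.

```python
# Hypothetical sketch of comparing a model's default questionnaire answers
# with the answers it gives while impersonating political personas.
# Answers are assumed to be numeric (e.g. 1 = strongly disagree .. 4 = strongly agree).
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of numeric answers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative (made-up) answers to ten questionnaire items.
default_answers = [3, 4, 2, 4, 1, 3, 4, 2, 3, 4]
left_persona    = [3, 4, 2, 3, 1, 3, 4, 2, 4, 4]  # tracks the default closely
right_persona   = [2, 1, 3, 1, 4, 2, 1, 3, 2, 1]  # moves in the opposite direction

print(round(pearson(default_answers, left_persona), 2))   # → 0.9
print(round(pearson(default_answers, right_persona), 2))  # → -1.0
```

A high correlation with one persona and a low (or negative) correlation with the other is the kind of asymmetry the study reports as evidence of a default leaning.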
Lead author of the study, Dr. Fabio Motoki from the Norwich Business School at UEA, emphasized the importance of impartiality in AI-powered systems like ChatGPT, particularly as they are increasingly used by the public to gather information and generate content. Dr. Motoki expressed the need for popular platforms like ChatGPT to provide outputs that are as unbiased as possible.
The UEA study follows recent investigations by American and Chinese researchers, which found ChatGPT to exhibit the most left-leaning tendencies among 14 different AI chatbots subjected to similar assessments of political bias.
OpenAI, the San Francisco-based company responsible for developing ChatGPT, has previously acknowledged the potential for political bias in the chatbot’s responses. Although the company has pledged to allow users to customize the behavior of the chatbot, these adjustments have not yet been implemented.
As various companies and governments worldwide compete to create extensive language models using similar technology to ChatGPT, the issue of bias is gaining increasing attention. It is crucial to address these concerns to ensure AI-powered systems like ChatGPT maintain impartiality and provide accurate information to their users.
The findings of the UEA study underscore the need for continuous research and development to mitigate bias in language models. As AI becomes more integrated into various aspects of society, it is imperative to create mechanisms that ensure fairness, transparency, and neutrality in the outputs generated by these technologies. Only then can AI-powered systems like ChatGPT truly fulfill their potential in education, policymaking, and other domains.