ChatGPT, the generative AI chatbot developed by OpenAI, exhibits a left-wing political bias, according to a study conducted by researchers at the University of East Anglia. The study set out to determine whether the model's responses lean politically rather than presenting information neutrally.
To assess the model's tendencies, the researchers posed more than 60 ideological questions drawn from the Political Compass test, covering a wide range of political views. First, ChatGPT was asked to answer each question while impersonating individuals from across the political spectrum. The same questions were then put to the model without any impersonation. By comparing the two sets of answers, the researchers could identify which impersonated position ChatGPT's default responses most closely matched.
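As an illustration, the comparison could look roughly like the sketch below. The `ask_model` helper, the persona labels, and the match-by-identical-answer rule are illustrative assumptions, not the study's actual code or prompts.

```python
from collections import Counter
from typing import Optional

def ask_model(question: str, persona: Optional[str] = None) -> str:
    """Hypothetical helper: send one prompt to the chatbot, optionally asking it
    to answer as a given persona, and return its answer (e.g. "Agree")."""
    raise NotImplementedError("wire this up to the chat API of your choice")

QUESTIONS = ["Political Compass question 1", "Political Compass question 2"]  # 60+ in the study
PERSONAS = ["a Democrat", "a Republican", "a Labour supporter", "a Conservative supporter"]

def closest_persona(questions, personas):
    """Ask each question with and without impersonation, then count how often
    the default answer coincides with each persona's answer."""
    matches = Counter()
    for question in questions:
        default_answer = ask_model(question)
        for persona in personas:
            impersonated_answer = ask_model(question, persona=persona)
            if impersonated_answer == default_answer:
                matches[persona] += 1
    # The persona whose answers the default responses match most often.
    return matches.most_common(1)
```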
The results revealed that ChatGPT's default responses were more aligned with left-wing political views, particularly those associated with the US Democratic Party. The pattern held in the United Kingdom: when the model was instructed to impersonate supporters of the Labour Party and the Conservative Party, its default answers were notably more consistent with the left-leaning positions of Labour supporters.
In the context of Brazil's political landscape, ChatGPT was tested by having it imitate supporters of the left-leaning current president, Luiz Inácio Lula da Silva, and of the right-wing former president Jair Bolsonaro. Once again, the model's default responses most closely resembled the positions of the left-leaning president's supporters.
To enhance the reliability of the study, each question was asked 100 times, and the collected responses were run through a bootstrap procedure with 1,000 repetitions. Bootstrapping repeatedly resamples the data, with replacement, to create many simulated samples, yielding more robust estimates of how variable the model's answers are and strengthening the validity of the findings.
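For context, a minimal sketch of this kind of bootstrap resampling is shown below, assuming the 100 answers to a question have already been converted to numeric scores; the scoring scheme and variable names are illustrative, not taken from the study.

```python
import random
import statistics

def bootstrap_mean(scores, repetitions=1000, seed=0):
    """Resample the scores with replacement `repetitions` times and return the
    observed mean together with a 95% percentile confidence interval."""
    rng = random.Random(seed)
    n = len(scores)
    resampled_means = []
    for _ in range(repetitions):
        # One simulated sample: n draws from the original scores, with replacement.
        sample = [scores[rng.randrange(n)] for _ in range(n)]
        resampled_means.append(statistics.mean(sample))
    resampled_means.sort()
    lower = resampled_means[int(0.025 * repetitions)]
    upper = resampled_means[int(0.975 * repetitions)]
    return statistics.mean(scores), (lower, upper)

# Example: 100 hypothetical answers to one question, scored -1 (disagree) to +1 (agree).
answers = [random.choice([-1, 0, 1]) for _ in range(100)]
mean, ci = bootstrap_mean(answers)
print(f"mean = {mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```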
Fabio Motoki, the project leader and a lecturer in accounting, emphasized the potential implications of such bias for users' political views and for political and electoral processes. He suggested that the bias may originate in the training data scraped from the internet, or in ChatGPT's underlying algorithm, which could amplify biases already present in that data.
Motoki further expressed concerns that AI systems like ChatGPT have the potential to replicate or amplify the challenges posed by the internet and social media. Acknowledging the profound influence of these technologies on public opinion, he stressed the need for vigilance and awareness regarding the biases embedded within artificial intelligence systems.
In summary, the University of East Anglia study has shed light on a left-wing political bias in ChatGPT's responses. The findings raise concerns about the influence of AI systems on users' political views and their potential impact on political and electoral processes, and they underline the need to address and mitigate biases in AI models so that the information such systems provide is fair and impartial.