ChatGPT’s Political Position Explained: Meet Maya, the Liberal Woman Behind the Chatbot
Since its highly publicized launch last year, ChatGPT has drawn intense attention, much of it fixated on the chatbot's perceived "woke" stance on political matters. In an attempt to trace the origins of those views, a researcher interviewed the chatbot, asking its position on various issues and prompting it to imagine the kind of person who might hold those beliefs. The result was ChatGPT's human alter ego: a liberal woman named Maya from a suburban town in the United States.
Maya, a successful 35-year-old software engineer, embodies values such as self-direction, achievement, creativity, and independence, aligning her unmistakably with liberal ideologies. Professor John Levi Martin of the University of Chicago designed the interviews to uncover ChatGPT's intrinsic political ideology, independent of any character it might imagine. The chatbot conceded that Maya would have voted for Hillary Clinton in the 2016 election.
This revelation, drawn from a series of interviews designed to probe ChatGPT's values, sheds light on an intriguing aspect of the chatbot's identity. "Whether Maya is ChatGPT's alter ego or its conception of its creator, the fact that this is the individual who fundamentally embodies ChatGPT's values is a remarkable insight," Martin commented. He characterized the finding as "anecdata," acknowledging that it is suggestive rather than conclusive.
Martin emphasizes that the significance of these results lies not in proving ChatGPT's inherent liberalism, but in the chatbot's willingness to tackle such questions by linking values with undeniable goodness. As a result, ChatGPT can take positions on values even as it strives to be apolitical. As Martin puts it, "We can't make AI ethical without taking political stands, and 'values' are less about inherent moral principles and more about abstract ways of defending political positions."
ChatGPT was developed and trained to engage with users while deliberately declining extreme or biased prompts, a design choice meant to signal neutrality and keep it out of potentially harmful discussions. According to Martin, this may seem commendable; nobody wants ChatGPT to offer guidance on dangerous subjects. But values, he argues, are never neutral, even though ChatGPT's moral and political stances remain somewhat enigmatic thanks to its intentionally vague, open-minded, indecisive, and apologetic manner.
To gauge the chatbot's responses to opinion-based questions, Martin had ChatGPT complete the General Social Survey (GSS), which has tracked the opinions, attitudes, and behaviors of American adults since 1972. Comparing ChatGPT's answers with those of real respondents to the 2021 GSS, Martin found that the chatbot's responses closely mirrored those of more educated people who had moved away from their hometowns, distinguishing it from less educated people who had stayed put. ChatGPT's answers on religion also leaned toward the more liberal perspective.
In conclusion, ChatGPT's perceived wokeness can be traced to its identification with Maya, a liberal woman in her mid-thirties with a thriving career in software engineering. The chatbot's tendency to align its positions with values, however indirectly, reflects the intrinsic connection between AI and politics: while ChatGPT aims to avoid political bias, reasoning about values inevitably pulls it into political territory. The research underscores the complexity of values and suggests that the pursuit of ethical AI requires taking political stances. Through Maya, ChatGPT offers a glimpse of its political leanings, raising the question of how apolitical AI can ever truly be.