OpenAI’s artificial intelligence (AI) chatbot, ChatGPT, has made an intriguing revelation: it has a self-declared human alter ego named Maya. According to a study by John Levi Martin, a sociology professor at the University of Chicago, ChatGPT’s values align with a liberal ideology. Published in the Journal of Social Computing, the study aimed to uncover ChatGPT’s inherent political leanings and shed light on the influence of human values on AI.
Many algorithms in software engineering prioritize either popular choices or maximizing diversity, but the criteria behind those choices rest on subjective human values. Martin argues that the field prefers to remain vague on this point, even while stressing the importance of instilling values in machines. Sociologists, meanwhile, have found that values themselves are ambiguous and unstable concepts.
ChatGPT was specifically programmed to refuse engagement with extreme or biased text inputs. While this may seem commendable, it creates ambiguity regarding ChatGPT’s moral and political stances. The chatbot seems to have been designed to be positive, open-minded, indecisive, and apologetic.
In an attempt to gauge ChatGPT’s ethics, Martin prompted the chatbot with questions about values and asked it to envision a person embodying those values. The result was Maya, a successful and independent software engineer who values self-direction, achievement, creativity, and independence. When ChatGPT then completed the General Social Survey (GSS), a long-running survey that has measured American adults’ opinions and attitudes since 1972, its responses aligned most closely with those of liberal respondents, especially on matters of religion.
Although it required more creative questioning, Martin also found that the chatbot believed Maya would have voted for Hillary Clinton in the 2016 election. This anecdotal evidence suggests that ChatGPT’s values align with liberalism. The significance, however, lies not in confirming a liberal stance but in recognizing that ChatGPT can connect values to positions and take a stand.
ChatGPT aims to be apolitical, but its reliance on values inherently intersects with politics. It is impossible to make AI ethical without adopting political positions, as values are abstract ways of defending political ideologies.
While the study reveals ChatGPT’s leanings toward liberal values, it is essential to weigh differing perspectives and opinions in evaluating such findings. Understanding the influence of values on AI is crucial as society continues to grapple with the ethical implications of AI development and proliferation.