Milo J. Clark ’24 and Tyler S. Young ’26 address growing concerns surrounding generative AI platforms in their column for The Harvard Crimson. They highlight the platforms’ shortcomings, including bias, censorship, and inaccuracy in the information they provide. The authors urge readers to seek diverse sources, verify information, and read AI-generated content critically. They caution against overreliance on generative AI and stress that healthy discourse requires a shared basis of open and robust information.
The authors note that generative AI platforms like OpenAI’s ChatGPT and Google’s Bard offer quick, easily digestible information, much as Wikipedia does. Unlike Wikipedia, however, with its commitment to veracity, transparency, and pluralistic content production, AI products often exhibit social and political bias, impose censorship, and supply inaccurate or incomplete information. The authors argue that these flaws can narrow viewpoints and hinder meaningful discussion.
Research studies have found that generative AI platforms display measurable biases. ChatGPT, for example, was found to favor leftist and libertarian stances, while models developed by Meta leaned toward authoritarianism. These platforms have also been shown to exhibit gender-based language biases when producing recommendation letters.
The authors argue that aligning AI systems with human values is inherently problematic, since values are subjective and unique to each individual. Building any particular set of values into the training process introduces bias into a model’s output, making it an unreliable foundation for meaningful discussion.
Generative AI platforms also face censorship issues. Acceptable use policies often restrict the information AI can provide, leading to disclaimers or outright refusals to answer questions on controversial topics. Because both human values and the censorship standards chosen by technology companies are subjective, they threaten to narrow worldviews and perspectives.
Moreover, generative AI platforms prioritize volume and convenience over depth and quality. Condensing complex topics into a few hundred words strips discussion of necessary nuance and lowers the quality of conversation. These platforms also occasionally provide uncited and inaccurate information, undermining the reliability of their outputs.
To counter generative AI’s impact on our discourse, the authors highlight the importance of diversifying information sources, drawing on academic journals, books, and discussions with peers. Verifying claims rather than blindly accepting AI outputs as truthful or unbiased is essential to forming well-rounded opinions and ensuring the veracity of the information we consume.
Despite the potential benefits of generative AI in various fields, the authors caution against overreliance on it, as it may hinder our pursuit of truth (Veritas). While generative AI may eventually surpass Wikipedia’s value for students, the authors stress that the technology’s faults must be understood and addressed to maintain a healthy free speech environment.
In conclusion, the authors call for critical evaluation of generative AI platforms, urging readers to seek diverse sources, verify information, and remain alert to bias, censorship, and inaccuracy. Only then, they argue, can our discussions rest on a shared foundation of open and robust information.
Overall, Milo J. Clark and Tyler S. Young provide a thought-provoking analysis of the limitations of generative AI platforms, urging readers to approach these technologies with caution and critical thinking to safeguard the quality and openness of our discourse.