ChatGPT, an artificial intelligence language model, has been making waves in academia thanks to its ability to generate essays that easily satisfy grading rubrics. Dr. Geiger, an assistant professor of communication and data science at UC San Diego, shares his experience using ChatGPT in his classes and compares it to how educators once viewed Wikipedia. Despite early fears about a platform anyone could edit, Wikipedia has become a valuable resource for understanding how knowledge is represented and contested. Similarly, Dr. Geiger required his students to engage critically with ChatGPT, training them to evaluate the AI's capabilities and limitations.
Dr. Geiger warns against blindly trusting AI, arguing that you cannot rely on an AI to do work you cannot independently verify, nor trust it to check its own work. While he notes that there are different ways to incorporate AI into the writing process, he cautions against outsourcing one's intellectual agency to a flawed, biased, and opaque system.
Furthermore, Dr. Geiger raises broader concerns about generative AI, such as OpenAI's lack of transparency, the use of copyrighted work to train ChatGPT, and the disruption the technology has caused educators. He emphasizes the importance of transparency while acknowledging how difficult it is to look behind the scenes of generative AI systems to uncover their biases and limitations.
In conclusion, Dr. Geiger's experience using ChatGPT in his classes underscores the need to approach AI with critical thinking and an awareness of its limitations. While AI can serve as a tool to aid the writing process, it is important to maintain intellectual agency and to understand the technology's potential impact on society.