Recently, Texas A&M University-Commerce professor Dr. Jared Mumm failed more than half of his class after using ChatGPT, a large language-model chatbot developed by OpenAI, to check whether his students' papers had been written by AI. When ChatGPT falsely claimed it had written several of the papers, Mumm accepted those claims without independent verification, and several students failed as a result.
The incident has since prompted an investigation by the university and the development of policies to address the use and misuse of AI technology in the classroom. Mumm has apologized to some of the students, agreed to re-review their papers, and said that he will no longer use ChatGPT to check student work.
ChatGPT is a powerful language model developed by OpenAI, trained on large amounts of text and code. It can produce creative writing, translate between languages, and answer questions. Despite its usefulness, it can also confidently generate false information, a failure mode often called hallucination, which is what happened here: the model keeps no record of its past outputs, so it cannot reliably say whether it wrote a given text.
The professor's acceptance of ChatGPT's claims at face value, and the resulting failure of several students, highlights the importance of understanding the limits of large language models. Their output should never be treated as authoritative without independent verification, and it certainly should not be used to discredit original work.
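To make the flaw concrete, here is a minimal sketch of the kind of "detection" query at the heart of the incident, written against the official openai Python package. The model name, prompt wording, and placeholder essay are illustrative assumptions, not a reconstruction of what Mumm actually typed.

```python
# A minimal sketch, assuming the official `openai` Python package (v1.x)
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

student_essay = "..."  # placeholder: the paper under suspicion

# Ask the model whether it authored the essay -- the same flawed
# "check" at the center of the incident.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model behaves the same way
    messages=[{
        "role": "user",
        "content": f"Did you write the following essay? Answer yes or no.\n\n{student_essay}",
    }],
)

# Whatever comes back is a plausible-sounding guess, not a verification:
# the model has no access to its own generation history, so it cannot
# actually check authorship.
print(response.choices[0].message.content)
```

Because the model will happily answer "yes" or "no" either way, a response like this is no more evidence of AI authorship than a coin flip.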
Ultimately, it is important to be careful when using AI and to take the necessary measures to protect original work from false claims. Anyone relying on ChatGPT's output should check it against independent evidence before acting on it. This episode has demonstrated just how serious the consequences of skipping that step can be.