Academic-style content produced by ChatGPT can be detected by existing technologies, a new study has found. Researchers from Plymouth Marjon University and the University of Plymouth in the UK led the study and noted that the results should serve as a wake-up call to university staff, who should reconsider their strategies for explaining and minimising academic dishonesty.
The Large Language Model (LLM) ChatGPT is seen as potentially revolutionary for research and education, although it has raised questions about academic integrity and plagiarism. In this study, the researchers directed ChatGPT to produce content in an academic style by giving it several questions to address.
Afterwards, the text was rearranged and populated with real references, and the resulting article was published in the journal Innovations in Education and Teaching International. The AI-produced text was found to be formulaic and, in some cases, was flagged by current AI-detection technologies.
Debby Cotton, a professor at Plymouth Marjon University and the study’s lead author, discussed the implications of this technology for education. She noted that while it poses difficult challenges for universities, particularly around testing student knowledge and teaching writing skills, it is also an opportunity to rethink what universities teach and why.
Peter Cotton, an associate professor at the University of Plymouth and corresponding author of the study, highlighted the role technology giants play in widening access to AI and emphasised the need for universities to adapt to a norm in which AI use is commonplace. Microsoft and Google are both integrating AI into their search engines and office suites.
This research aims to demonstrate that universities must address cheating while, at the same time, learning to adapt to a future in which AI is normalised in the education system.