In a recent study published in the journal Cell Reports Physical Science, researchers developed a tool that differentiates between AI-generated and human-authored academic science writing with over 99% accuracy. As AI chatbots such as ChatGPT grow increasingly capable of producing text that resembles human language, the researchers, led by Professor Heather Desaire of the University of Kansas, set out to create an accessible, user-friendly means of detecting AI-generated writing across genres, one that requires no computer science background.
The research team focused on a particular type of article, the perspective, which provides an overview of a research topic. They selected 64 perspectives and used ChatGPT to generate 128 articles on similar subjects. The team found that AI writing is markedly predictable: human authors build more complex paragraph structures and vary their sentence lengths and vocabulary more widely.
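The study's exact measurements are not reproduced here, but the sentence-length diversity it describes can be quantified in a few lines. The sketch below, a simplified illustration rather than the paper's method, splits text on sentence-ending punctuation and reports the mean and standard deviation of sentence lengths in words; higher variability is the pattern the researchers associated with human writing.

```python
import re
import statistics

def sentence_length_stats(text):
    """Return (mean, stdev) of sentence lengths in words.

    Sentences are split naively on ., !, or ? -- a rough proxy,
    not the study's actual preprocessing.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = statistics.mean(lengths)
    stdev = statistics.stdev(lengths) if len(lengths) > 1 else 0.0
    return mean, stdev

# A passage mixing short and long sentences yields a high stdev,
# the kind of variability attributed to human authors.
sample = ("Short burst. Then a much longer, winding sentence that meanders "
          "through several clauses before finally ending. Brief again.")
print(sentence_length_stats(sample))
```

On uniformly paced text the standard deviation collapses toward zero, which is the predictability the study attributes to ChatGPT's output.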
The team identified 20 distinct features the model could use to detect AI-generated text, including characteristic preferences in punctuation and vocabulary. The model correctly identified AI-generated full perspective articles 100% of the time and individual paragraphs within those articles with 92% accuracy, outperforming existing AI text detectors.
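The paper's full list of 20 features is not given here, but features of the kind it describes, punctuation frequencies and vocabulary diversity, are straightforward to compute. The function below is a hypothetical illustration of such a feature vector, not the study's actual feature set:

```python
import re

def style_features(text):
    """Compute a small, illustrative feature vector: punctuation
    rates normalized by word count, plus a type-token ratio as a
    crude vocabulary-diversity measure."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    n_words = max(len(words), 1)
    return {
        "semicolons_per_word": text.count(";") / n_words,
        "question_marks_per_word": text.count("?") / n_words,
        "parentheses_per_word": text.count("(") / n_words,
        "type_token_ratio": len(set(words)) / n_words,
    }

print(style_features("However, results vary; some (but not all) differ. Why?"))
```

Vectors like this, computed per paragraph or per article, could then be fed to any off-the-shelf classifier; the study's appeal is precisely that such features are simple and interpretable.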
Overall, the study marks significant progress in detecting AI-generated content and upholding the integrity of academic writing. While the model can distinguish AI writing from that of human scientists, it cannot identify AI-generated student essays. However, the methods can be readily replicated and adapted to different requirements.
The researchers aim to broaden the model's applicability by testing it on larger datasets and a wider range of academic science writing. With this tool, scientists and educators can better navigate AI-generated text, understand its limitations and benefits, and maintain the integrity of academic writing.