ChatGPT, an AI chatbot that has surged in popularity since November 2022, may be good at mimicking human workers in many fields, but scientific research is not one of them, says a new report. Researchers have built a computer model that detects ChatGPT-generated fake studies more than 99% of the time, while also finding that the AI can still fool some human readers with its science writing. ChatGPT-generated papers differed from human text in several ways, including paragraph complexity, sentence-level diversity in length, punctuation marks, and popular words. Computer programs that distinguish real papers from AI-generated ones therefore remain necessary, since humans often cannot tell the difference. The researchers caution that this is only a proof of concept and that much larger studies are needed to build more robust and reliable models. Such models could also be trained on specific scientific disciplines to help maintain the integrity of the scientific method.
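To illustrate the kind of signals described above, here is a minimal sketch of stylometric feature extraction in Python. The feature definitions and the word list are assumptions for illustration only, not the actual features or model from the report:

```python
import re
import statistics

# Hypothetical list of "popular words"; the report does not specify which
# words the real model uses.
POPULAR_WORDS = {"the", "and", "of", "to", "in", "is", "that", "for"}

def stylometric_features(text: str) -> dict:
    """Compute simple text statistics of the sort the report describes:
    paragraph complexity, variation in sentence length, punctuation usage,
    and reliance on common words."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentence_lengths = [len(s.split()) for s in sentences]
    return {
        "sentences_per_paragraph": len(sentences) / max(len(paragraphs), 1),
        "sentence_length_stdev": (
            statistics.stdev(sentence_lengths) if len(sentence_lengths) > 1 else 0.0
        ),
        "punctuation_per_word": sum(text.count(c) for c in ",;:()") / max(len(words), 1),
        "popular_word_ratio": sum(w in POPULAR_WORDS for w in words) / max(len(words), 1),
    }
```

In practice, features like these would be fed to a trained classifier; this sketch only shows how such measurements might be computed from raw text.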
AI Chatbot ChatGPT Struggles to Generate Convincing Scientific Papers