Cambridge Study Reveals Failings of AI in University Essays

A recent study by researchers at the University of Cambridge has shed light on the key features of essays generated with artificial intelligence (AI). The study compared essays written by three university students with the assistance of ChatGPT against 164 essays written by GCSE students.

According to the study, AI-written essays exhibit distinct characteristics: repetition of words and phrases, frequent use of connective phrases such as ‘however’, ‘overall’, and ‘moreover’, and the inclusion of numbered lists and complex vocabulary.

Interestingly, the research found that essays generated with ChatGPT were weaker in analysis and comparison than human-written essays, though they performed well at conveying information and reflecting on the given topics.

This study has prompted university departments to review their policies on AI technology usage, especially in light of the upcoming exam season. Currently, Cambridge University prohibits the use of AI technology in assessed work due to academic misconduct concerns. However, the guidelines for non-assessed work vary across different departments.

In response to the study’s findings, students are advised to exercise caution when using AI tools for writing tasks. While tools that correct grammar and spelling are permitted, those that write or edit content on a student’s behalf are considered inappropriate.

The debate on AI usage in academic settings continues, with different departments providing varying levels of guidance to students. Engineering students, for example, are allowed to use ChatGPT for structuring coursework as long as they disclose its usage and provide information on the prompts used.


As universities navigate the evolving landscape of AI technology in education, the balance between leveraging its benefits and upholding academic integrity remains a central concern. The guidance provided to students reflects ongoing efforts to define the boundaries of acceptable AI usage while preserving the essence of learning and academic rigor.


Advait Gupta
