Researchers at 186 universities recently tested the AI chatbot ChatGPT to see how it would fare in accounting exams. The conclusion is that, while impressive, the AI bot still falls short of expectations. The studies revealed that students scored an overall average of 76.7%, compared to ChatGPT’s score of 47.4%. On certain topics, such as accounting information systems (AIS) and auditing, ChatGPT did perform better than students. However, it struggled with tax, financial and managerial assessments, likely because of its difficulty with the mathematical processes these types of questions require.
Microsoft-backed OpenAI developed GPT-4, which leverages machine learning to generate natural language text; it passed the bar exam with a score in the 90th percentile and scored almost perfectly on the GRE Verbal test. These results stand in stark contrast to ChatGPT’s, and it is clear the chatbot must ‘work hard’ to improve its performance.
Jessica Wood, a freshman at Brigham Young University (BYU) in the US, and lead study author David Wood, a professor of accounting at BYU, acknowledge that ChatGPT has the potential to revolutionise teaching and learning in several fields, but that its performance is still not good enough for proper accounting work. ChatGPT fared better on true/false and multiple-choice questions, often providing accurate explanations for its answers, but it struggled to give the right answer to short-answer questions. It also frequently produces misinformation, presenting details that are completely fabricated.
OpenAI, founded by Elon Musk and technology entrepreneur Sam Altman, is a research laboratory based in San Francisco whose mission is to ensure that artificial intelligence (AI) benefits all of humanity. It operates on the principle of keeping its results open and accessible to everyone, and it is working to improve ChatGPT’s performance, though the AI still has a long way to go.