A recent study by researchers at Purdue University found that a significant portion of the programming answers provided by ChatGPT, a chatbot popular among computer programmers, are incorrect. According to the study, 52 percent of the answers ChatGPT generated contained misinformation.
In recent years, programmers have increasingly turned to AI-powered tools like ChatGPT for help with coding tasks, reducing their reliance on platforms like Stack Overflow. The study's findings, however, raise concerns about the accuracy and reliability of AI-generated answers.
The researchers analyzed 517 programming questions from Stack Overflow and assessed ChatGPT's responses to them. They found that a majority of ChatGPT's answers were not only incorrect but also more verbose and less consistent than the human-written answers.
Despite the high error rate, the study found that many human programmers still prefer the chatbot's answers. In a user study with 12 programmers, participants favored ChatGPT's answers 35 percent of the time, yet failed to identify the mistakes in 39 percent of the AI-generated responses they preferred.
One possible explanation for this preference is the chatbot's polite language, formal and analytical style, and apparent comprehensiveness. These qualities may lend ChatGPT's responses an air of credibility, leading users to overlook the misinformation they contain.
The study underscores the need for caution when relying on AI platforms like ChatGPT for programming assistance, as the high error rate could seriously undermine the accuracy and reliability of the resulting code. Even so, many programmers continue to use ChatGPT, highlighting the difficulty of balancing convenience against accuracy in the rapidly evolving field of AI.