New Study Shows ChatGPT’s Medical Advice Is Hard to Distinguish from Human Responses
A recent study conducted by the NYU Tandon School of Engineering and Grossman School of Medicine has found that responses to medical queries from ChatGPT, an artificial intelligence chatbot, are nearly indistinguishable from those of human healthcare providers. The research suggests that chatbots like ChatGPT could become valuable allies in patient-provider communication.
The study involved 392 participants, who were presented with a mix of responses from ChatGPT and from human healthcare providers. Surprisingly, participants showed only a limited ability to tell the two apart: on average, they correctly identified chatbot responses 65.5% of the time and provider responses 65.1% of the time. This accuracy rate remained consistent across demographic groups.
The study also examined how much participants trusted the chatbot’s responses. Overall, participants mildly trusted the chatbot’s advice, giving it an average trust rating of 3.4 out of 5, but trust varied with the complexity of the health-related task. Logistical questions, such as scheduling appointments and insurance inquiries, earned the highest average rating at 3.94, followed by preventive care, including vaccines and cancer screenings, at 3.52. Diagnostic and treatment advice received the lowest ratings, at 2.90 and 2.89, respectively.
The findings highlight the potential for chatbots like ChatGPT to assist in patient-provider communication, particularly for administrative tasks and common chronic disease management. However, the researchers emphasize that further research is needed before chatbots take on more clinical roles, and they urge healthcare providers to apply critical judgment when relying on chatbot-generated advice, given the limitations and potential biases of AI models.
The study’s results have significant implications for the future of healthcare. Chatbots like ChatGPT could become valuable tools, handling administrative tasks so that healthcare providers can focus on more complex patient care. However, it is crucial to balance the benefits of chatbot assistance against the risks of relying too heavily on AI in clinical settings.
As chatbots evolve and improve, researchers and healthcare professionals must continue studying patient-chatbot interactions. Doing so will help ensure that chatbots can effectively support healthcare providers in delivering quality care while maintaining patient trust and safety.