Recent research has shown that AI-driven ChatGPT can be just as proficient, if not more helpful, than actual medical professionals in responding to patients’ concerns. A team of researchers, including individuals from the University of California, San Diego; Johns Hopkins; and other institutions, posed 195 patient questions to OpenAI’s ChatGPT and compared the quality and empathy of the chatbot’s responses with the answers that real doctors had posted on Reddit. The responses were scored by a panel of healthcare professionals, practitioners in internal medicine, paediatrics, oncology, and other related specialties, on a five-point scale measuring both the quality of the information and the empathy of the answer.
The study found that the evaluators preferred the chatbot’s response to the physician’s in 78.6% of the 585 evaluations (each of the 195 questions was scored by three evaluators). The chatbot’s answers were 3.6 times more likely to be rated high in quality and 9.8 times more likely to be rated high in empathy than the doctors’. The chatbot also gave longer, more detailed, and more personalized answers than its human counterparts, whose replies were generally short and succinct. For example, when a question about bleach splashed into an eye was put to both the chatbot and a doctor, the chatbot opened with “I’m sorry to hear that you got bleach splashed in your eye” and followed with explicit instructions on how to treat the injury. The doctor, by contrast, answered only “That sounds like you will be fine”, along with brief guidance to flush the eye and contact poison control.
Although ChatGPT’s responses were promising, the chatbot is by no means a diagnostic tool. OpenAI’s ChatGPT is, however, a capable tool that could supplement, rather than replace, professional medical advice in the near future.