ChatGPT is an artificial intelligence (AI) tool that can provide medical information and guidance. While it has been touted as having a better bedside manner than some doctors, it still lacks the expertise and experience of a human physician. A recent study found that evaluators preferred ChatGPT's responses over physicians' and rated them significantly higher for both quality and empathy. Physicians may have something to learn from the chatbot about patient communication, but ChatGPT cannot fully replace a human doctor, and it says so itself.
CNN covered the findings of this research in a recent article. The study compared responses to roughly 200 medical questions posted to a public online forum, answered both by ChatGPT and by a pool of physicians. ChatGPT produced longer, more detailed replies that evaluators largely found more compassionate and informative than those of its physician counterparts. The tool was also praised for its ability to tailor replies to different health literacy levels. This makes it an appealing resource for people seeking medical information, but ChatGPT still should not be fully trusted: it cannot provide medical diagnoses, treatment, or personalized advice.
Dr. David Asch, a Professor of Medicine at the University of Pennsylvania who ran the Penn Medicine Center for Health Care Innovation for 10 years, had the opportunity to ask ChatGPT how it could be useful in healthcare. He found the responses thorough but also quite verbose. In the end, Dr. Asch believes that ChatGPT should be used as a support tool for doctors rather than as a guide for patients.
In conclusion, while ChatGPT is proving to be an effective tool for obtaining medical information, it cannot replace a human doctor. Even with its comprehensive and empathetic responses, it should be treated only as a supplementary source of medical guidance, not as a substitute for professional medical advice.