The rapid adoption of AI technologies in healthcare has been a major topic of conversation in recent years. Exciting progress has been made in understanding these technologies and their potential to improve the delivery of healthcare services. Numerous applications have been approved by the FDA, hundreds of specialized AI models have been created for fields like radiology and cardiology, and innovative studies have been launched to explore the use of AI for personalized health solutions.
However, there is a major red flag when it comes to the use of AI in healthcare: the lack of established standards and of the high-quality data needed to train these models successfully. When AI fails during diagnosis, medical institutions and physicians could face legal liability. The situation is even more concerning with unregulated chatbot models made available to the general public.
For instance, ChatGPT is a language model developed by OpenAI that produces fluent, human-like responses. However, a recent study showed that while the model could answer basic cardiology-related questions correctly about 90% of the time, its accuracy dropped to roughly 50% as the questions became more complex. In addition, there’s the risk of AI “hallucinating”, that is, confidently providing wrong answers that could put patient safety at risk. That is why a patient cannot simply choose to use ChatGPT instead of a real doctor. As experienced practitioners, doctors bring to the table not only a wealth of knowledge, but also the clinical judgment, empathy, and nuance that are essential to a personalized doctor’s visit.
AI is also being adopted by health insurance companies to review claims in bulk, potentially saving time and money; however, there is concern that this efficiency could come at the cost of less personalized healthcare.
Ultimately, while AI can streamline processes and automate certain manual work, it must not be seen as a replacement for the human brain, which remains better equipped to assess patient needs and make sound judgments grounded in nuanced experience. AI can be used to help us doctors, not replace us, and this must be kept in mind when implementing any new AI-driven system in healthcare.