Doctors Lack Skills to Interpret and Act on AI in Clinical Practice: Study
Artificial intelligence (AI) systems are becoming increasingly common across industries, including healthcare. However, a recent study reveals that many doctors are hesitant to adopt AI tools because they lack the skills to interpret and act on the predictions these tools produce. This gap poses a potential barrier to harnessing the full potential of AI in clinical practice.
Clinical decision support (CDS) algorithms are tools that can assist healthcare providers in making critical decisions related to the diagnosis and treatment of medical conditions. For instance, they can help determine the appropriate antibiotics to prescribe or whether a risky heart surgery is advisable. These algorithms employ various techniques, from basic regression-derived risk calculators to advanced machine learning and AI-based systems. They can predict outcomes such as the likelihood of a patient developing life-threatening sepsis or the most effective therapy for an individual with heart disease.
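To make the idea of a regression-derived risk calculator concrete, here is a toy sketch of how such a tool might convert a few vital signs into a sepsis probability. The function name, predictors, and coefficients are invented for illustration; real CDS tools fit their coefficients to large clinical datasets and use far more inputs.

```python
import math

def sepsis_risk(temp_c, heart_rate, wbc_count):
    """Toy logistic-regression-style sepsis risk estimate.

    The coefficients below are hypothetical, chosen only to
    illustrate the mechanics of a regression-derived calculator.
    """
    # Weighted sum of (hypothetical) predictors
    score = (-8.0
             + 1.5 * (temp_c - 37.0)   # fever
             + 0.03 * heart_rate       # tachycardia
             + 0.12 * wbc_count)       # elevated white cell count
    # The logistic function maps the score to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-score))

# A febrile, tachycardic patient with elevated white cells
# scores higher than a patient with normal vitals.
print(sepsis_risk(39.2, 118, 16.0) > sepsis_risk(37.0, 70, 7.0))
```

The output is a probability, not a diagnosis, which is precisely why interpreting it correctly requires the probabilistic skills the study highlights.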
However, the success of these technologies heavily depends on physicians’ ability to interpret and act upon their risk predictions, which necessitates a distinct set of skills that many doctors currently lack. In a perspective article published in the New England Journal of Medicine, researchers emphasize the need for physicians to understand how AI systems operate and make decisions before integrating them into clinical practice.
While some CDS tools are already integrated into electronic medical record systems, healthcare providers often find the current software cumbersome and challenging to use. Doctors do not need to be experts in mathematics or computer science, but they do require a foundational understanding of algorithms in terms of probability and risk adjustment. Unfortunately, most physicians have not received formal training in these areas.
To address this knowledge gap, the authors propose that medical education and clinical training explicitly cover probabilistic reasoning tailored to CDS algorithms. They suggest that these probabilistic skills be taught early in medical school, enabling physicians to critically evaluate and apply CDS predictions in their decision-making. Additionally, doctors should practice interpreting CDS predictions and learn to communicate effectively with patients about the role of AI in guiding their treatment plans.
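A short, hypothetical illustration shows why this probabilistic training matters when acting on an AI alert. Even an accurate-sounding tool can be wrong for most flagged patients if the condition is rare; the numbers below are illustrative, not taken from the study.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a patient flagged by a tool truly has the condition.

    A direct application of Bayes' theorem; all inputs here are
    hypothetical values chosen for illustration.
    """
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# A tool with 90% sensitivity and 90% specificity, applied to a
# condition affecting 1% of patients:
ppv = positive_predictive_value(0.90, 0.90, 0.01)
print(round(ppv, 3))  # → 0.083: most flagged patients do NOT have the condition
```

In this example, fewer than one in ten alerts is a true positive, a result that often surprises clinicians without formal training in base rates and is exactly the kind of reasoning the authors argue should be taught.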
By enhancing doctors’ skills in understanding and utilizing AI tools, hospitals and healthcare systems can maximize the impact of these technologies on patient care. However, it is essential to provide comprehensive training and resources to ensure that healthcare professionals are equipped to navigate the complexities of AI in clinical practice. Through these efforts, the integration of AI and human expertise can ultimately lead to improved medical outcomes and more informed decision-making.