Artificial intelligence (AI) is poised to revolutionize healthcare by transforming how we diagnose and treat disease. AI tools have already shown promising advances in early-stage cancer detection, medication dosing accuracy, and robot-assisted surgery. In some medical contexts, AI systems have even outperformed experienced physicians.
Recent studies have highlighted AI's potential as a healthcare tool. Researchers from Boston's Mass General Brigham found that ChatGPT, an AI language model, correctly diagnosed more than 70% of patients in case studies used to train medical students. In another study, a machine learning model accurately identified the severity of illness among older adults in ICUs at more than 170 hospitals.
These advances hold great promise for improving outcomes for millions of Americans and easing the load on healthcare professionals struggling with overwork and burnout. However, as AI plays an increasingly integral role in our health systems, it is crucial to ensure that it serves all Americans and does not perpetuate existing inequities or discrimination.
Biases and inequities already exist in American healthcare, and AI technologies can reflect and magnify these disparities. For example, a 2019 study found that a healthcare risk-prediction algorithm used by major insurers systematically underestimated the health risks of Black patients. The algorithm used past healthcare spending as a proxy for medical need, and because less money has historically been spent on Black patients with the same conditions, it scored them as healthier than equally sick white patients.
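To see how a seemingly neutral proxy can produce this kind of bias, consider a minimal simulation in Python. The numbers here are entirely hypothetical; the point is only to show the mechanism: two groups have identical health needs, but one receives less spending, so a model that ranks patients by cost ranks them unfairly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical underlying health need (hypothetical simulation).
need = rng.normal(50, 10, size=n)
group = rng.choice(["A", "B"], size=n)

# Group B receives systematically less spending for the same level of need,
# mirroring the access gap behind the 2019 finding.
spending = need * np.where(group == "A", 1.0, 0.8) + rng.normal(0, 5, size=n)

# An algorithm that ranks patients by predicted *cost* rather than *need*
# flags fewer group-B patients for extra care, even though the groups are
# equally sick; and the B patients it does flag are sicker than the A ones.
threshold = np.quantile(spending, 0.90)  # cutoff for a "high-risk" care program
flagged = spending >= threshold
for g in ("A", "B"):
    in_group = group == g
    print(f"group {g}: flagged {flagged[in_group].mean():.1%}, "
          f"mean need among flagged {need[flagged & in_group].mean():.1f}")
```

In this toy setup, ranking by cost quietly encodes a spending gap as a health gap. Notably, the 2019 study's authors found that changing the prediction target from cost to a direct measure of health largely removed the disparity.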
Addressing bias in AI algorithms is vital. Efforts to mitigate bias must be undertaken by researchers, developers, and users alike if AI tools are to deliver on their promise for everyone. This starts with using the right data in algorithm development: data that is representative and captures key demographic characteristics such as race, gender, socioeconomic background, and disability. It is equally important to identify data gaps and document the limitations they impose on the model.
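What might such a representativeness check look like in practice? Here is a minimal sketch; the column names, demographic categories, and reference shares are illustrative assumptions, not figures from any cited study. A development team could compare the demographic makeup of its training data against the population the model will serve and flag any group that falls short:

```python
import pandas as pd

# Hypothetical reference shares for the population a model will serve; in
# practice these would come from census or health-system enrollment data.
REFERENCE_SHARES = {
    "race": {"Black": 0.13, "White": 0.60, "Hispanic": 0.19, "Asian": 0.06},
    "gender": {"Female": 0.51, "Male": 0.49},
}

def representation_gaps(df: pd.DataFrame, tolerance: float = 0.05) -> list[str]:
    """Flag any demographic group whose share of the training data falls
    short of its reference share by more than `tolerance`."""
    gaps = []
    for column, reference in REFERENCE_SHARES.items():
        observed = df[column].value_counts(normalize=True)
        for group_name, expected in reference.items():
            actual = float(observed.get(group_name, 0.0))
            if expected - actual > tolerance:
                gaps.append(f"{column}={group_name}: {actual:.1%} of training "
                            f"data vs. {expected:.1%} of reference population")
    return gaps

# Toy training set that underrepresents several groups.
data = pd.DataFrame({
    "race": ["White"] * 80 + ["Black"] * 5 + ["Hispanic"] * 10 + ["Asian"] * 5,
    "gender": ["Female"] * 40 + ["Male"] * 60,
})
for gap in representation_gaps(data):
    print("Underrepresented:", gap)
```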
Representation should extend beyond the data itself. Development teams should include individuals with diverse perspectives and backgrounds to incorporate cultural sensitivity, ethics, and accountability into decision-making processes. Bias evaluation should not be a one-time exercise but should occur both during development and as the systems are implemented in clinical settings. It is also crucial to remain open to reassessing initial assumptions and retraining algorithms to achieve the intended outcomes, recognizing that there may be tradeoffs between accuracy and fairness.
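Ongoing bias evaluation can be made equally concrete. One common check, sketched below with illustrative labels and an illustrative threshold, compares a diagnostic model's true-positive rate (its sensitivity) across demographic groups. A large gap means the model misses disease more often in some groups than others, exactly the kind of finding that should trigger reassessment and retraining:

```python
import numpy as np

def per_group_sensitivity(y_true, y_pred, groups):
    """True-positive rate for each demographic group: of the patients who
    actually have the condition, what fraction does the model catch?"""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        rates[str(g)] = float(y_pred[positives].mean()) if positives.any() else float("nan")
    return rates

# Illustrative predictions: the model is far more sensitive for group A.
y_true = [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0]
groups = ["A"] * 6 + ["B"] * 6

rates = per_group_sensitivity(y_true, y_pred, groups)
print(rates)  # {'A': 0.75, 'B': 0.25}
gap = max(rates.values()) - min(rates.values())
if gap > 0.10:  # illustrative threshold; acceptable gaps are context-specific
    print(f"Sensitivity gap of {gap:.0%} across groups; reassess before deploying")
```

Running the same check on live clinical data, not just the development set, is what turns bias evaluation from a one-time exercise into an ongoing safeguard.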
Public trust and comfort with AI in healthcare are essential. A recent survey showed that just under half of US adults familiar with AI technologies would be comfortable with AI making a medical diagnosis for them. Even fewer individuals are open to the idea of AI performing surgery. To change this perception and unlock AI’s lifesaving potential, transparency is key. This includes publishing datasets and source code, as well as disclosing to patients the role of AI in their evaluation and treatment.
Establishing standards, best practices, and regulations for AI in healthcare is equally important. Lawmakers, industry experts, and academics must collaborate to understand the power and potential of AI while safeguarding consumers, and government has a central role to play in shaping that regulation.
AI has the potential to make healthcare more equitable when used correctly. By training AI on diverse patient data, we can gain a better understanding of how treatments and interventions work for broader patient populations. The race toward better healthcare outcomes is already underway, and collaboration is key to ensuring that we all cross the finish line together.