AI in Healthcare Could Worsen Ethnic and Income Inequalities, Warn Scientists
Scientists are warning that the use of artificial intelligence (AI) models in healthcare could exacerbate ethnic and income inequalities. Epidemiologists from the University of Cambridge and the University of Leicester caution that large language models (LLMs) could further entrench inequities for ethnic minorities and lower-income countries.
One of the main concerns stems from systemic data biases. AI models used in healthcare are typically trained on information drawn from websites and the scientific literature. However, evidence shows that ethnicity data is often missing from these sources. Consequently, AI tools can be less accurate for underrepresented groups, leading to ineffective drug recommendations and even racist medical advice.
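As a rough illustration of the kind of gap the researchers describe, the sketch below uses a small, entirely hypothetical set of patient records (the column names, groups, and figures are invented for illustration, not drawn from the study) to check how often ethnicity goes unrecorded and how a model's accuracy compares across the groups that are recorded.

```python
import pandas as pd

# Hypothetical patient records: every value here is invented for illustration.
records = pd.DataFrame({
    "ethnicity":        ["White", "White", None, "Black", None, "Asian", "White", None],
    "true_outcome":     [1, 0, 1, 1, 0, 1, 0, 1],
    "model_prediction": [1, 0, 0, 0, 0, 1, 0, 0],
})

# How often is ethnicity simply not recorded?
missing_rate = records["ethnicity"].isna().mean()
print(f"Ethnicity missing in {missing_rate:.0%} of records")

# Accuracy broken down by recorded ethnicity; large gaps between groups are
# the kind of signal that would prompt a closer look at the training data.
for group, subset in records.dropna(subset=["ethnicity"]).groupby("ethnicity"):
    accuracy = (subset["true_outcome"] == subset["model_prediction"]).mean()
    print(f"{group}: accuracy {accuracy:.2f} (n={len(subset)})")
```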
The researchers note that, across many disease groups, people from ethnic minority backgrounds often face a different level of risk. They argue that if the published literature already contains biases and lacks precision, future AI models trained on it are likely to perpetuate, and potentially worsen, those biases.
In addition, the scientists express concerns about the impact on low- and middle-income countries (LMICs). AI models are predominantly developed in wealthier nations, which also dominate funding for medical research. As a result, LMICs are vastly underrepresented in healthcare training data, which can result in AI tools offering inaccurate advice to individuals in these countries.
Despite these concerns, the researchers acknowledge the potential benefits AI can bring to medicine. To mitigate the risks, they propose several measures. First, they advocate clear descriptions of the data used to develop AI models. They also call for further efforts to address health inequalities in scientific research, including improved recruitment methods and better recording of ethnicity information.
To promote fair and inclusive healthcare, the researchers argue that training data should adequately represent diverse populations, and more research is needed on the use of AI for marginalized groups.
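A minimal sketch of what such a representation check might look like in practice is shown below; the group labels, counts, and population shares are made up purely for illustration and do not come from the study.

```python
# Hypothetical figures: compare each group's share of a training set against
# a reference population share and flag groups that fall well short.
training_counts  = {"White": 90_000, "Black": 4_000, "Asian": 5_000, "Other": 1_000}
population_share = {"White": 0.75, "Black": 0.09, "Asian": 0.12, "Other": 0.04}

total = sum(training_counts.values())
for group, count in training_counts.items():
    train_share = count / total
    ratio = train_share / population_share[group]
    status = "under-represented" if ratio < 0.8 else "roughly in line"
    print(f"{group}: {train_share:.1%} of training data vs "
          f"{population_share[group]:.1%} of population -> {status}")
```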
Dr Mohammad Ali of the University of Leicester stresses the need for caution, while acknowledging that progress in the field should not be held back.
While AI holds promise for transforming healthcare, it is crucial to address the technology's inherent biases and limitations. By taking proactive measures, such as ensuring diverse representation in training data and conducting further research on underrepresented groups, the healthcare industry can harness AI responsibly and work towards reducing, rather than exacerbating, existing inequalities.