A recent study revealed a pro-western cultural bias in the explanations provided by artificial intelligence (AI) systems. As AI technology becomes more prevalent in decision-making processes, the need for understandable AI outputs has led to the development of explainable AI (XAI) systems.
XAI systems aim to offer simple and transparent explanations for the decisions made by complex AI models. These explanations not only help AI engineers monitor and improve their models but also assist users in making informed decisions about how to trust and utilize AI outputs.
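To make concrete what such an explanation can look like, the sketch below derives a simple feature-attribution explanation for a trained classifier using scikit-learn's permutation importance. This is a minimal illustration, not a method endorsed by the study; the dataset, model, and wording of the explanation are all assumptions chosen for brevity.

```python
# Minimal sketch of a feature-attribution explanation (illustrative only;
# the study itself does not prescribe this method).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature contributes
# to the model's accuracy on held-out data.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Turn the raw importances into a short, human-readable explanation.
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(f"'{data.feature_names[i]}' contributed "
          f"{result.importances_mean[i]:.3f} to the model's accuracy")
```

Note that even this simple output embodies design choices about what counts as a satisfying explanation, which is precisely where cultural preferences come into play.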
In high-stakes domains, such as hiring and medical diagnosis, the demand for XAI is growing. The European Union's AI Act, which grants citizens a right to explanation, underscores the importance of providing understandable AI outputs.
However, the study found that many existing XAI systems are tailored towards individualist, typically western, populations. Most XAI user studies to date have sampled predominantly western populations, paying little attention to cultural variation in explanation preferences.
Cultural background plays a significant role in determining the type of explanations individuals prefer. Preferences for internalist explanations (focused on beliefs and desires) are common in individualistic societies, while externalist explanations (based on external factors) are more prevalent in collectivist cultures.
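The sketch below makes this contrast concrete by rendering the same decision with either an internalist or an externalist emphasis. The feature names, weights, "internal"/"external" tags, and templates are all hypothetical, invented only to illustrate the distinction the study describes; they are not drawn from any real XAI system.

```python
# Hypothetical sketch: presenting the same decision with an internalist or
# externalist emphasis. All feature names, tags, and templates are invented.
ATTRIBUTIONS = [
    # (feature, weight, kind) -- 'kind' tags whether the factor concerns the
    # person's own beliefs/desires or their external circumstances.
    ("stated career goals", 0.42, "internal"),
    ("years of prior experience", 0.27, "internal"),
    ("regional labour-market demand", 0.31, "external"),
    ("team staffing needs", 0.18, "external"),
]

def explain(style: str) -> str:
    """Render the attributions, foregrounding the factors that match
    the requested explanatory style ('internalist' or 'externalist')."""
    kind = "internal" if style == "internalist" else "external"
    lines = [f"{feature} (weight {weight:.2f})"
             for feature, weight, k in ATTRIBUTIONS if k == kind]
    return (f"{style.capitalize()} explanation: decision driven mainly by "
            + "; ".join(lines) + ".")

print(explain("internalist"))
print(explain("externalist"))
```

A culturally sensitive XAI system might select between such framings based on user preference rather than defaulting to one style for everyone.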
Despite the importance of cultural differences in explanation preferences, the study revealed that XAI developers often overlook this factor in their research. This insensitivity to cultural variation could undermine trust in AI systems, especially among non-western populations.
To address this cultural bias in XAI, the study suggests closer collaboration between developers and psychologists to test for relevant cultural differences. Additionally, researchers should report the cultural backgrounds of their study samples and avoid overgeneralizing their findings.
In conclusion, it is essential for XAI systems to provide explanations that are acceptable to people from different cultural backgrounds. By incorporating cultural diversity into XAI research, developers can enhance the inclusivity and trustworthiness of AI systems on a global scale.