A new study has shown that Machine Learning (ML) algorithms embedded in online banking services can be unfair towards certain groups, producing biased decisions about consumers' credit cards, car loans, and mortgages. One approach to addressing such bias is to remove sensitive attributes from the training data. However, sensitive attributes can also be represented indirectly through other attributes in the data, so removing them alone is not always enough.
To address this problem, the study proposes an approach based on covariance analysis that identifies attributes acting as stand-ins for sensitive attributes. The approach was shown to reduce bias in ML models while maintaining their overall performance, in an evaluation on two datasets drawn from traditional and online banking institutions.
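The idea behind covariance analysis can be illustrated with a minimal sketch: measure how strongly each candidate attribute co-varies with a sensitive attribute, and flag those above a threshold as likely proxies. This is an illustrative example only, not the study's exact method; the data, column names, and threshold below are all assumptions.

```python
# Sketch of covariance-based proxy detection. All names and values here
# are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical binary sensitive attribute (e.g. a protected-group flag).
sensitive = rng.integers(0, 2, size=n).astype(float)

features = {
    # "zip_region" is constructed to correlate with the sensitive
    # attribute, standing in for a real-world proxy such as postal code.
    "zip_region": sensitive * 2.0 + rng.normal(0.0, 0.5, size=n),
    # "income" is drawn independently, so it should not be flagged.
    "income": rng.normal(50.0, 10.0, size=n),
}

def find_proxies(sensitive, features, threshold=0.3):
    """Return names of features whose normalized covariance (Pearson
    correlation) with the sensitive attribute exceeds the threshold."""
    proxies = []
    for name, values in features.items():
        # np.corrcoef gives the covariance normalized by both standard
        # deviations, so the threshold is scale-independent.
        corr = np.corrcoef(sensitive, values)[0, 1]
        if abs(corr) > threshold:
            proxies.append(name)
    return proxies

print(find_proxies(sensitive, features))  # flags only the proxy feature
```

In practice, a flagged proxy attribute could then be dropped or transformed before training, just as the sensitive attribute itself would be.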
This research matters because it highlights the need for improved fairness in the ML algorithms used in online banking services. Identifying which attributes in the data encapsulate sensitive information is significant because those attributes can drive decisions that result in discrimination or unfair treatment. The proposed approach therefore provides a practical tool for reducing bias and improving fairness in decision-making.
In conclusion, this research addresses a crucial problem in the online banking industry, helping more people access credit, loans, and mortgages without sensitive attributes such as gender, ethnicity, or religion affecting their applications. Reducing bias in ML algorithms is a step towards a more equitable outcome for everyone.