FinRegLab, a nonprofit fintech research and development organization, has published two papers exploring the potential of machine learning (ML) tools to produce fairer credit decisions for both consumers and small businesses. The research aims to understand the role of ML models in credit underwriting and their impact on accuracy, credit access, fairness, and regulatory compliance.
The first paper, titled Machine Learning Explainability & Fairness: Insights from Consumer Lending, builds on previous empirical research conducted by FinRegLab in collaboration with Professors Laura Blattner and Jann Spiess of the Stanford Graduate School of Business. It provides updated findings and expands the analysis of how ML models are used in credit underwriting.
The second paper, Explainability & Fairness in Machine Learning for Credit Underwriting: Policy & Empirical Findings Overview, summarizes the project's key findings and their implications for regulation and public policy. It emphasizes that rigorous research, thoughtful deployment, and proactive regulatory engagement are needed to ensure that any new technology benefits borrowers and financial service providers alike.
The research highlights that ML models have the potential to increase accuracy and expand credit access, particularly when combined with new data sources. However, some ML models are considered "black boxes" because their complexity makes it difficult to understand how they reach decisions. The papers find that certain explainability tools can offer valuable insight into how ML models operate, and that automated debiasing techniques may significantly improve fairness compared to traditional compliance approaches.
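To make this concrete, here is a minimal, purely illustrative sketch (not drawn from the papers themselves) of one widely used open-source explainability technique, permutation importance, applied to a toy credit-scoring model. The feature names, data, and model below are invented for the example:

```python
# Illustrative sketch only: a toy credit model and a model-agnostic
# explainability diagnostic (permutation importance). The features,
# data, and model are hypothetical, not FinRegLab's.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical applicant features
X = np.column_stack([
    rng.normal(680, 60, n),    # credit_score
    rng.exponential(0.3, n),   # debt_to_income
    rng.integers(0, 30, n),    # years_of_history
])
# Synthetic default outcome loosely tied to the features
logits = -0.01 * (X[:, 0] - 680) + 2.0 * X[:, 1] - 0.05 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature
# degrade held-out performance? A rough window into a "black box".
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in zip(["credit_score", "debt_to_income", "years_of_history"],
                     result.importances_mean):
    print(f"{name}: {imp:.4f}")
```

Diagnostics of this kind indicate which inputs drive a black-box model's predictions, the sort of transparency question the explainability tools examined in the study are designed to answer at much greater depth.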
Melissa Koide, CEO of FinRegLab, emphasizes the need for appropriate human oversight and effective data science tools to harness the potential of machine learning models responsibly. She believes that, when combined with new data sources, these models can enhance credit access for millions of underserved consumers and small businesses.
The overview paper calls for updating existing regulatory frameworks to accommodate the growing use of machine learning models and fairness techniques. It proposes defining the qualities that explainability techniques should have and clarifying regulators' expectations for lenders' searches for fairer alternative models. Because adoption is still at an early stage across stakeholders, markets, circumstances, and technologies, the paper stresses the need for cooperation and shared understanding.
The empirical paper assesses model diagnostic tools from seven technology providers: Arthur, H2O.ai, Fiddler, RelationalAI, SolasAI, Stratyfy, and Zest AI. These tools, along with open-source alternatives, were applied to a range of underwriting models built specifically for the study. The aim was to help lenders address transparency challenges and manage machine learning models effectively and in compliance with the law.
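By way of illustration only, the sketch below shows the kind of simple fairness diagnostic such tools automate: comparing approval rates across groups under a candidate decision threshold. The group labels, score gap, threshold, and the conventional 80% adverse impact ratio rule of thumb are assumptions for the example, not the study's methodology or results:

```python
# Illustrative sketch only: a basic fairness check of the kind that
# fairness and debiasing tools automate. All values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=1_000)          # protected-class proxy
scores = rng.random(1_000) - 0.1 * (group == "B")   # hypothetical score gap
approved = scores >= 0.5                            # candidate threshold

def adverse_impact_ratio(approved, group, reference="A"):
    """Ratio of each group's approval rate to the reference group's."""
    ref_rate = approved[group == reference].mean()
    return {g: approved[group == g].mean() / ref_rate
            for g in np.unique(group)}

print(adverse_impact_ratio(approved, group))
# Ratios well below ~0.8 are a conventional red flag that would prompt
# a search for a fairer alternative model or threshold.
```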
The research conducted by FinRegLab forms part of a broader project on explainability and fairness in machine learning for credit underwriting, made possible with support from JPMorgan Chase and the Mastercard Center for Inclusive Growth. Further findings related to artificial intelligence’s implications for financial inclusion can be found on FinRegLab’s website.
Overall, FinRegLab's research sheds light on both the potential benefits and the challenges of machine learning in credit underwriting. The findings underscore the importance of responsible deployment, regulatory adaptation, and stakeholder collaboration to ensure these models are used fairly and transparently, ultimately benefiting both borrowers and financial service providers.