Managing the risks of machine learning (ML) in financial services is essential to ensuring the safety and reliability of ML applications. The Bank of England (BoE) and the Financial Conduct Authority (FCA) recently conducted a survey on the use of ML in UK financial services, highlighting key findings and risks associated with ML in the industry. In this article, we discuss the risk management approaches available to financial institutions (FIs) to mitigate these risks.
Effective ML risk mitigation requires robust governance frameworks, including model risk management and data-quality validation. ML models should undergo thorough assessment and review at every stage, from development to deployment. Clear lines of accountability should be established to ensure oversight of autonomous decisions, paralleling Article 22 of the General Data Protection Regulation (GDPR) and Chapter 2 of the EU's proposed AI Act. These regulations aim to restrict solely automated decisions that significantly affect individuals and to protect people's fundamental rights.
Model validation is a critical aspect of reducing the risks associated with ML applications, as it helps ensure the accuracy and reliability of models. Validation techniques should be applied throughout the entire ML development lifecycle: in the pre-deployment phase, where models are trained and tested, and in the post-deployment phase, where they are live in the business. By continuously monitoring and assessing the performance of ML applications, potential risks and issues can be identified and addressed promptly. Common validation methods used by FIs include outcome monitoring, testing against benchmarks, data quality validation, and black-box testing techniques.
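To make two of these methods concrete, the following Python sketch shows a minimal data-quality check and an outcome-monitoring test against a performance benchmark. The thresholds and function names are illustrative assumptions, not prescriptions from the survey; in practice they would be set by the institution's model risk management policy.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical thresholds; a real policy would define and document these.
MIN_AUC = 0.75        # minimum acceptable discriminatory power
MAX_NULL_RATE = 0.05  # maximum tolerated share of missing values per column

def validate_data_quality(df: pd.DataFrame) -> list[str]:
    """Basic data-quality validation: flag columns with too many nulls."""
    issues = []
    for col, rate in df.isna().mean().items():
        if rate > MAX_NULL_RATE:
            issues.append(f"column '{col}' has {rate:.1%} missing values")
    return issues

def benchmark_test(y_true: np.ndarray, y_score: np.ndarray) -> bool:
    """Outcome monitoring: check live model performance against a benchmark."""
    return roc_auc_score(y_true, y_score) >= MIN_AUC
```

Checks like these would typically run on a schedule against live data, so that degradation is caught before it affects customer outcomes.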
As ML applications in the financial services industry evolve, they are increasingly able to identify and adapt to new behaviors quickly using live training data. These behaviors may include shifts in consumer spending patterns, new fraud schemes, and emerging money-laundering typologies. It is therefore essential for FIs to establish a robust model validation framework that detects and prevents unfair treatment or discrimination.
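One way such a framework might flag potential discrimination is with a simple group-level fairness check. The sketch below computes a demographic parity gap, the difference in approval rates between the most and least favored groups; the choice of metric and the 10% threshold are illustrative assumptions only, not criteria named in the survey.

```python
import pandas as pd

MAX_PARITY_GAP = 0.10  # hypothetical tolerance for approval-rate disparity

def demographic_parity_gap(decisions: pd.Series, group: pd.Series) -> float:
    """Gap between the highest and lowest approval rate across groups."""
    rates = decisions.groupby(group).mean()
    return float(rates.max() - rates.min())

def check_fairness(decisions: pd.Series, group: pd.Series) -> bool:
    """Return True if the model's decisions pass the parity check."""
    return demographic_parity_gap(decisions, group) <= MAX_PARITY_GAP
```

A production framework would use more than one fairness criterion and route failures to human review rather than simply returning a flag.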
Monitoring is another important aspect of managing ML risks. Around 42% of survey respondents reported using some form of monitoring, although they did not specify which safeguards were in place. Common controls include alert systems, human-in-the-loop systems, and back-up systems. Alert systems flag unusual actions for investigation and corrective action by employees. Human-in-the-loop systems require human review or approval of ML decisions, providing an additional layer of oversight. Back-up systems can replace the ML application in the event of failures or errors, minimizing negative impacts.
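As an illustration of how these three controls could be composed around a single decision point, here is a hypothetical Python sketch. The score thresholds, the rules-based fallback, and the review queue are assumptions made for the example rather than controls described in the survey.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    approved: bool
    pending_review: bool = False  # True when routed to a human reviewer

def decide_with_safeguards(
    features: dict,
    ml_model: Callable[[dict], float],
    rules_fallback: Callable[[dict], bool],
    review_queue: list,
    approve_above: float = 0.7,
    decline_below: float = 0.3,
) -> Decision:
    """Wrap an ML decision with the three monitoring controls described above."""
    try:
        score = ml_model(features)
    except Exception:
        # Back-up system: on failure, fall back to a deterministic rules engine.
        return Decision(approved=rules_fallback(features))
    if decline_below < score < approve_above:
        # Human-in-the-loop: ambiguous scores are routed for manual review.
        review_queue.append(features)
        return Decision(approved=False, pending_review=True)
    if score >= approve_above:
        return Decision(approved=True)
    # Alert system: a real deployment would also log and flag unusual scores here.
    return Decision(approved=False)
```

The design choice here is to degrade gracefully: the ML model handles clear-cut cases, humans handle ambiguous ones, and a deterministic fallback keeps the business running if the model fails outright.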
The use of ML in financial institutions has increased significantly in recent years and is expected to continue, or even accelerate. With this increase in ML applications come heightened risks related to data, models, and governance frameworks. Mitigating these risks requires prioritizing data quality, validating models properly, and implementing strong governance frameworks with appropriate safeguards.
While ML applications offer clear benefits for FIs, the transition from deterministic models to ML-based models, whose behavior is harder to understand and explain, introduces new risks and may invite regulatory scrutiny in the future. By implementing effective risk management approaches, FIs can harness the power of ML while ensuring the safety and reliability of their applications.