The Bank of England (BoE) and the Financial Conduct Authority (FCA) have published a joint report on machine learning (ML) in UK financial services, setting out the results of an industry survey carried out in 2019. The stated aim of the review was to give the regulators a better understanding of the current use of ML in financial services. This in turn allows them to consider the implications of this transformative technology when developing any related policy, balancing its benefits to business and consumers against the potential risks to consumers and the financial system as a whole.
The regulators appreciate that the new data-driven economy is driving dramatic changes both to the financial markets themselves and to the way in which they supervise those markets. They consider that ML’s ability to analyse and interpret big data sets held by financial firms is a principal catalyst of this change and note that increasing volumes of data have accelerated the pace of ML development. The report also acknowledges the steps that firms have taken to integrate AI into their business models (such as using ML in back office processes, moving ML from the initial development phase to business lines, and designing ML in-house). Read our AI toolkit for more on how best to roll out AI.
106 firms contributed to the survey from a pool of nearly 300 banks, credit brokers, e-money institutions, financial market infrastructure firms, investment managers, insurers, non-bank lenders and principal trading firms. They answered questions on the nature of ML deployment, the business areas where it is used and the maturity of applications, along with some specific use cases.
The regulators considered that the responses, whilst not statistically representative of the entire UK financial system, did provide some “interesting insights”.
The take-up of machine learning is increasing
- Two thirds of respondents said they already use ML in some form across a range of business areas.
- One third of ML applications are used for a considerable share of activities in a specific business area. Deployment is most advanced in the banking and insurance sectors.
- ML is most commonly used in anti-money laundering (AML) and fraud detection, as well as in customer-facing applications (e.g. customer services and marketing). Some firms also use ML in areas such as credit risk management, trade pricing and execution, as well as general insurance pricing and underwriting.
- Firms mostly design and develop ML applications in-house. However, they sometimes rely on third party providers for the underlying platforms and infrastructure, such as cloud computing.
Firms hope for more regulatory guidance
- Regulation is not seen as an unjustified barrier to ML adoption, but some firms stressed the need for additional guidance on how to interpret current regulation.
- The most common issues cited concerned model risk management and the need to adapt processes and systems to cover ML-based models.
- Some firms noted the challenge of meeting regulatory requirements to explain decision-making where ‘black box’ ML models are used.
- Several firms thought that regulatory guidance on best practice around ML use would be helpful and could promote greater deployment.
- Additional guidance could also potentially help firms design controls, model risk management frameworks and policies for ML applications.
Risk management is key
- The BoE/FCA are looking into whether ML adds degrees of complexity, as this could affect a firm’s risk profile and the regulators’ supervisory approach. Respondents stated that where ML was provided by a third party, assessing any added complexity was difficult.
- The majority of users apply their existing model risk management framework to ML applications. But many say that these frameworks may have to evolve in line with the increasing maturity and sophistication of ML techniques. This was also highlighted in the BoE’s response to the Future of Finance report.
- Firms thought that ML does not necessarily create new risks, but that it could amplify existing ones. The most common safeguards used are alert systems and so-called ‘human-in-the-loop’ mechanisms (see the sketch after this list). This is consistent with the EU ethics guidelines for trustworthy AI, which stress the need for a human-centric approach to the risk management of AI.
- The deployment of ML can also reduce risks. For example, it has the potential to reduce human bias, help identify market abuse and improve fraud detection and AML processes.
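By way of illustration only, a ‘human-in-the-loop’ safeguard of the kind respondents described is often implemented as a routing rule: the model acts automatically only where its confidence is high, and refers borderline cases to a human reviewer. The minimal Python sketch below shows this pattern under assumptions of our own; the scoring model, thresholds and names are hypothetical, and neither the report nor the regulators prescribe any particular design.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Route(Enum):
    AUTO_APPROVE = auto()
    AUTO_DECLINE = auto()
    HUMAN_REVIEW = auto()  # the 'human-in-the-loop' path


@dataclass(frozen=True)
class Thresholds:
    # Illustrative values only; in practice these would be set and
    # monitored under the firm's model risk management framework.
    approve_above: float = 0.90
    decline_below: float = 0.10


def route_decision(model_score: float, t: Thresholds = Thresholds()) -> Route:
    """Route a model output: act automatically only at high confidence,
    otherwise raise an alert and refer the case to a human reviewer."""
    if model_score >= t.approve_above:
        return Route.AUTO_APPROVE
    if model_score <= t.decline_below:
        return Route.AUTO_DECLINE
    return Route.HUMAN_REVIEW


if __name__ == "__main__":
    # Hypothetical scores from, say, a fraud-detection model.
    for score in (0.95, 0.05, 0.55):
        print(f"score={score:.2f} -> {route_decision(score).name}")
```

The design choice here is simply that automation is bounded: every case the model cannot classify confidently generates an alert and lands with a person, which is the human-centric approach the EU ethics guidelines describe.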