Newsletter on artificial intelligence and machine learning

BCBS | Newsletters | 16 March 2022
Status: Current
Topics: Fintech

The Committee issues this newsletter to provide greater detail on its internal discussions regarding artificial intelligence and machine learning. The Committee believes the information provided may be useful for both supervisors and banks in their day-to-day activities. This document is for informational purposes only and does not constitute new supervisory guidance or expectations.

  • Banks are increasingly exploring opportunities for using artificial intelligence (AI), including machine learning (ML). 

  • Banks' use of AI/ML presents significant opportunities but can also heighten certain risks and challenges. 

  • The Committee intends to continue exploring banks' use of AI/ML, particularly with respect to explainability, governance, and the implications for resilience and financial stability.

Banks are increasingly exploring opportunities for using AI/ML. AI/ML technology is expected to increase banks' operational efficiency and to facilitate improvements in risk management. While significant opportunities are emerging from the increasing use of AI/ML in many areas of banking, these techniques also carry risks and challenges. Banks are still in the process of developing best practices for managing the associated risks. Given the increasing adoption of this technology and the potential risks, the Committee is analysing banks' use of AI/ML and the potential implications for bank supervision.

In its discussions to date, the Committee has identified several areas for continued analysis by supervisors. In some cases, AI/ML models may be more difficult to manage than traditional models because they can be more complex. Generally, banks are seeking to maintain a level of transparency in model design, operation and interpretability of model outcomes that is commensurate with the risk of the banking activity being supported. Similar challenges exist when AI/ML model development is outsourced, as banks retain responsibility and accountability for appropriate due diligence and oversight. Because AI/ML deployment often involves large data sets, interconnectivity with third parties and the use of cloud technologies, it can also create multiple possible points of cyber risk. In addition, given the volume and complexity of the data sources commonly used to support them, AI/ML models may present greater data governance challenges in ensuring data quality, relevance, security and confidentiality. Furthermore, AI/ML models (as with traditional models) can reflect biases and inaccuracies in the data they are trained on, potentially resulting in unethical outcomes if not properly managed.

Given the challenges associated with AI/ML, both supervisors and banks are assessing existing risk management and governance practices to determine whether the roles and responsibilities for identifying and managing risks remain sufficient. As with other complex operations and technologies, it is important that banks have appropriately skilled staff, who may include model developers, model validators, model users and independent auditors.

Building on its discussions to date on the supervisory implications of banks' use of AI/ML, the Committee is working to develop further insights on this topic. Its continuing discussions will focus on three areas:

  • First, the extent and degree to which the outcomes of models can be understood and explained.
  • Second, AI/ML model governance structures, including responsibilities and accountability for AI/ML-driven decisions.
  • Third, the potential implications of broader usage of AI/ML models for the resilience of individual banks and, more broadly, for financial stability.

The Committee believes that the rapid evolution and use of AI/ML by banks warrant further discussion of the supervisory implications, which will be facilitated by continued sharing of experiences among supervisors, industry and subject matter experts.