High-level summary: BCBS SIG industry workshop on the governance and oversight of artificial intelligence and machine learning in financial services

Tokyo, 3 October 2019

On 3 October 2019, the Basel Committee on Banking Supervision's (BCBS) Supervision and Implementation Group (SIG) held a workshop on the use of artificial intelligence (AI) and machine learning (ML) in the banking sector, chaired by Mr Arthur Yuen, Deputy Chief Executive of the Hong Kong Monetary Authority. The SIG met with senior representatives from internationally active banks and associated industry associations, consulting and technology firms and other public authorities to discuss how banks are leveraging AI and ML technologies to better manage risk or offer new and innovative financial services, and the associated challenges related to risk governance, data management and engagement of third party service providers. The SIG also discussed how bank supervisors and other official sector bodies are approaching the risks arising from the adoption of financial technology.

The workshop follows discussions held with industry in February 2018 on how fintech developments are influencing bank business models and how banks are responding to the opportunities and challenges associated with fintech. A key observation made since February 2018 is that in the last 12 to 18 months, the use of AI and ML in risk assessments and analytics has increased across a range of functions, including credit analytics, fraud and anti-money laundering (AML) detection, and regulatory compliance. Nevertheless, ongoing challenges discussed at the workshop include "black box" risk and explainability of ML models, bias, ethics, transparency and data privacy requirements, and data management.

There was a recognition that while AI and ML models share characteristics with other financial and regulatory models used by banks, AI and ML models may amplify traditional model risks. One subset of risks highlighted by participants is that related to data, including the quantity and quality of vast data sets, data access and engagement with third parties that use or store data. With regard to unique risks posed by AI and ML, participants discussed how best to deal with ethics and bias, given that these issues extend beyond the remit of prudential authorities. In this context, while existing governance frameworks and policies are generally considered to apply to AI and ML models, further refinements may be needed.

There was a discussion on the role of model risk management functions in overseeing AI and ML validation, and the challenges of having the right skill sets and expertise to address risks specific to AI and ML. Supervisors also provided commentary on how they are considering the supervision and oversight of AI and ML models.

The SIG agreed that ongoing supervisory engagement with stakeholders and information sharing are beneficial, and future outreach will be considered.