Denis Beau: The financial supervisor in the age of AI

Speech by Mr Denis Beau, First Deputy Governor of the Bank of France, at the Association Europe Finances Régulations Seminar "Artificial intelligence: a game changer for financial supervision?", Paris, 5 June 2024.

The views expressed in this speech are those of the speaker and not the view of the BIS.

Central bank speech | 01 July 2024

Ladies and Gentlemen,

It gives me great pleasure to open this seminar on artificial intelligence for financial supervision.

The financial industry is data-driven and its analysis is essential, whether for assessing credit risk, estimating the volatility of an asset or the cost of an insurance risk, to name but a few examples. It is therefore only natural that AI technologies, which aim to harness the wealth of data available to financial institutions, should be deployed in this sector. Artificial intelligence is already one of the main drivers of the current digital transformation of the financial sector, and the development of GenAI should further accelerate this trend.

From a supervisor's point of view, the impact of artificial intelligence warrants particular vigilance as it is potentially ambivalent: AI is a source of opportunities for the sector, including for its supervisor, but it is also a new vector of risk. This ambivalent impact partly explains the regulatory framework that has just been introduced. In my remarks this morning to introduce your conference, I will therefore discuss, from a supervisor's point of view, the opportunities and risks of AI for the financial sector, and the conditions for its effective regulation.

1/ AI offers many opportunities for the financial sector... and also for supervisors.

Our observations over the last few years confirm that AI is being used increasingly by financial institutions in all segments of the value chain. They deploy it in particular to improve the 'user experience' (for example, chatbots for customer support), as well as to automate and optimise a number of internal processes. AI is also used to monitor and mitigate risk, as illustrated by its success in use cases relating to the fight against fraud, money laundering and the financing of terrorism (AML/CFT). Properly harnessed, AI is therefore likely to increase the efficiency of financial institutions and contribute to their profitability - which is a key factor in their soundness - including by offering risk control solutions.

The arrival of generative AI (GenAI) marks a new phase in this innovation process. The first thing that comes to mind is the qualitative improvement in existing tools: to give a simple example, whereas "traditional" assistance chatbots use natural language processing to provide general explanations, chatbots powered by GenAI are able to provide a personalised response that is more tailored to the user's situation. GenAI therefore has the potential to accelerate the adoption of new technologies, and hence the pace of innovation and the transformation of processes: for example, the ability to make computer queries in natural language, and to generate code on command, will call into question the monopoly of 'professional' programmers. More generally, GenAI could boost productivity: the AI Commission, chaired by Anne Bouverot and Philippe Aghion, estimated the additional growth that the deployment of AI could generate in France by 2034 at around 1% per year.

Supervisors obviously have no intention of remaining on the sidelines of these major transformations, and are already making use of AI technologies to improve their efficiency when performing their duties. Projects already developed at the ACPR include the early detection of anomalies in institutions' reporting, and our LUCIA tool, which analyses large volumes of banking transactions and enables us to assess the performance of the AML/CFT models deployed in banks.

Very recently, with the help of our innovation centre, Le Lab, the ACPR organised a 'Suptech Tech Sprint', a hackathon designed to explore what GenAI can bring to the various supervisory activities. The three-day event revealed the potential of large language models (LLMs) for supervision, with eight prototypes jointly developed by external data scientists and ACPR staff. Four projects will be further developed as part of our strategic plan and, I hope, new tools will be created to help the supervisor in many of its activities. This Tech Sprint also enabled us to lay the groundwork for a longer-term review of the way in which we want to develop supervisory activities: in particular, part of the analysis will always have to be carried out by human supervisors, as our main challenge is to maintain a very high degree of reliability in our processes.

2/ The use of AI is not risk-free, however, and this is my second point: AI can in fact increase risks, not only for individual institutions but also for the financial sector as a whole.

Firstly, at the microprudential level, i.e. for each individual institution, the use of AI can generate risks for the soundness of the institution and its customers. A poorly calibrated pricing model can generate systematic losses, and therefore jeopardise the long-term viability of an institution. For customers, the use of AI entails the risk of inappropriate or even discriminatory treatment, risks to privacy when personal data is processed, and risks of misinformation, or even manipulation, in customer relations.

In this respect, the lack of transparency of algorithmic decisions is a major concern for supervisors with regard to AI. Naturally, this raises concerns for customer protection, because customers need to be able to understand the automated decisions made on their behalf. But it is also a governance issue: an institution that has a poor understanding of the decisions made by its AI systems cannot claim to control the risks entailed.

If we now broaden the focus, the mass adoption of AI may give rise to risks to the stability of the financial system as a whole, or what we call 'macroprudential' risks. In this respect, two main sets of risks can be identified. Firstly, herd behaviour on the financial markets could be exacerbated by the use of the same types of tools, resulting in greater volatility and procyclicality. Secondly, the massive deployment of AI may create systemic risks of dependence on third-party players if the financial sector relies heavily on purchasing 'off-the-shelf' AI systems - for example in the field of GenAI, where the main players today are the same as those who dominate the cloud market.

However, we must remain cautious, as the future is not written. The importance of the systemic risk factors I have just mentioned - the same types of tools leading to herd behaviour, the same suppliers - will depend above all on a technological question: will generalist models gradually become the norm for all uses? If so, there is a very real risk that this will lead to what economists call a natural monopoly or oligopoly. This will not be the case if specialist models dominate.

Lastly, at the frontier between microprudential and macroprudential supervision, I would like to mention one last risk, and not the least important: cyber risk. In recent years, this has become the number one operational risk for the financial sector. And AI is a factor that amplifies this risk. First, because technology greatly increases the danger posed by attackers: AI code-writing assistants hijacked to design malicious software, synthetic voices facilitating identity theft, etc. The list of threats is long, even though technology can also be used to counter these attacks. Second, and more generally, the growing deployment of AI could further increase the 'technical interconnections' within the financial system, where technologies, systems and suppliers are intertwined in an increasingly complex set of interdependencies, making it easier to transfer a vulnerability from one system to another. This is one of the key reasons behind the European DORA regulation, which will come into force in January 2025.

This brings me to the regulatory framework for AI in the financial sector.

3/ Faced with the risks of AI, and to enable the financial sector to take full advantage of its opportunities, we need to build effective regulation.

The move towards a regulatory framework has already begun: with its AI Act, the European Union has adopted the world's first legal framework and laid the foundations for 'trustworthy AI'. To this end, the regulation distinguishes between several levels of risk, among which the 'high-risk' category - which forms the core of the text - will apply to the financial sector in at least two respects: creditworthiness assessment when granting credit to individuals; and risk assessment and pricing in health and life insurance.

Although the text focuses above all on the fundamental rights of citizens, the financial supervisor must also take account of other objectives: financial stability, AML/CFT, etc. The European legislator has clearly seen the need to link the cross-sectoral objectives of the text with the specific objectives of financial regulation: it has therefore entrusted the role of 'market surveillance authority' to the supervisory authorities of the financial sector for financial use cases. This is a wise choice, as it will allow the best possible coordination of the implementation of sectoral and cross-sectoral legislation, with the help - when the time comes - of guidelines from the European supervisory authorities. In any event, on the basis of the work it has been carrying out on AI for several years now, the ACPR is ready to exercise the new role that the AI Act is set to confer on it.

However, we do not intend to perform this role in isolation: on the contrary, I believe we need to develop effective coordination to provide a framework for the use of AI. First, at the European level, where I would like to see the rapid creation of a common methodology for auditing AI systems in the financial sector in order to reduce microprudential risks. As regards the macroprudential aspects, I believe that one solution would be to encourage the emergence of European suppliers of AI solutions, in order to diversify the tools and therefore the risks.  

That said, we will have to go beyond the European level because, by its very nature, the regulation of AI is a global issue. I note, moreover, that many jurisdictions are expressing concerns similar to ours, which underlines the value of the many international initiatives (FSB, OECD, UN, etc.), which must now be brought together. We also need to go further by developing cooperation between sectoral authorities, because AI-related issues are largely interconnected: today, for example, I have already touched on competition and data protection issues. One of the aims of today's seminar, in my view, is to encourage an exchange of views and dialogue between all the stakeholders - and I hope that it will be fruitful.

Thank you for your attention.