Álvaro Santos Pereira: Artificial intelligence and financial stability - navigating risks and opportunities
Opening remarks by Mr Álvaro Santos Pereira, Governor of the Banco de Portugal, at the conference "Artificial intelligence and financial stability - navigating risks and opportunities", Lisbon, 27 October 2025.
The views expressed in this speech are those of the speaker and not the view of the BIS.
Good morning, everyone - whether you are here with us in Lisbon or joining us online.
It is with great pleasure that we welcome you to the 2025 Banco de Portugal Financial Stability Conference on "Artificial Intelligence and Financial Stability: Navigating Risks and Opportunities".
I am honoured to address this distinguished audience and extend a warm welcome to our esteemed speakers. Today we will draw on your knowledge to engage in a discussion on issues that are transforming and shaping the future of our economies and the financial system.
We live in a world defined by rapid and profound transformation - technological, economic and geopolitical. In such an environment, central banks and financial authorities cannot afford to simply react. We must anticipate changes, understand their implications and proactively plan and prepare for what lies ahead. Only by doing so can we ensure our ability to fulfil our missions and core mandate.
Today's conference focuses on Artificial Intelligence and cybersecurity.
AI is what economists and economic historians call a General Purpose Technology: a technology, or set of technologies, so profound and so important that it has the potential to fundamentally change the way we work, the way we organize production, and the way we interact with each other.
General Purpose Technologies don't arrive often. In fact, since the late 18th century and the advent of the Industrial Revolution, we have had only a few of these pervasive technologies: the steam engine, electricity, the computer, the internet and now AI.
Over the past half century or so, we are thus witnessing the diffusion of a third General Purpose Technology, one that stands on the shoulders of the technological giants that preceded it: the computer and the internet. And the diffusion of AI will have huge repercussions for our daily lives and for our economies.
According to the OECD, over the next decade the diffusion of AI will contribute between 0.4 and 1.3 percentage points to annual labour productivity growth in G7 economies. AI will also have a significant impact on total factor productivity. AI thus has the potential to reverse the productivity slowdown that has plagued advanced economies over the past 30 years or so.
And these estimates cover only the next decade. More importantly, they do not assume Artificial General Intelligence or the arrival of quantum computing, both of which will soon make their presence felt and will have much larger economic and societal impacts.
In other words, AI is a revolution in the making and our lives and our economies will change dramatically with its diffusion.
Obviously, the financial system will not be immune to the AI revolution.
In fact, both the adoption of artificial intelligence and the rising frequency of cyber incidents are already transforming the financial landscape at an unprecedented pace. The diffusion of artificial intelligence has already reshaped the financial sector: on the supply side, we have witnessed remarkable breakthroughs in deep learning and large language models, and, on the demand side, there has been a relentless pursuit of efficiency, productivity and competitiveness.
It is well known that the financial sector has historically been an enthusiastic adopter of new technologies. Several estimates show that the financial sector's exposure to AI is among the highest of any sector in our economies, and the diffusion of AI in the sector is happening at an incredible speed. As supervisors, our role is to ensure that the opportunities this technological evolution brings are seized, but in a prudent manner.
There's no doubt that artificial intelligence allows financial institutions to increase their operational efficiency and enhance fraud detection. Advanced artificial intelligence models can also improve risk management practices, particularly in credit scoring and risk analysis. For central banks and financial authorities, artificial intelligence also opens promising avenues through so-called SupTech tools. These tools allow supervisors to harness vast and granular datasets, enhancing monitoring, analysis and policy design. Forecasting and policymaking will also change dramatically with the advent of these very large and very detailed micro datasets.
Yet one of the central dilemmas of AI lies precisely in the very power and efficiency that make it so transformative – its ability to act faster, absorb more information, and operate at far higher bandwidth than humans can. These very strengths can create ripple effects that magnify every feedback loop that already exists in our complex financial systems. Consequently, future crises may unfold with greater speed and intensity, potentially evolving in real time. We need to be prepared for this.
Naturally, besides the aforementioned exciting opportunities, the integration of artificial intelligence might also pose some risks to financial stability. Allow me to focus on three key ones.
When financial market participants possess limited private information, or when several AI tools share the same algorithm, their actions can become synchronised. This herd behaviour can rapidly affect asset prices, create volatility, and amplify market cycles. Whilst each individual's behaviour may be considered rational or justified, it can still create systemic risks. Policy must therefore explore how to manage destabilising dynamics without unduly interfering with the autonomy of institutions or stifling technological innovation.
The increasing complexity of advanced machine learning models, including large language models, can also lead to opacity. This poses challenges for both internal risk management and external supervision. Transparency and clarity are vital for assessing reliability, ensuring fairness, and enforcing accountability.
Developing and training frontier artificial intelligence models is computationally costly and requires deep technical expertise. Model development is currently highly concentrated among a small number of companies or external providers. The reliance on a few external artificial intelligence services and cloud infrastructures introduces systemic concentration and third-party dependency risk across the financial sector. A failure in one external provider can propagate rapidly across the entire system.
We thus need to prepare adequately to deal with these challenges. And we also need to invest significantly in these technologies, so that we benefit from these important innovations and are ready to face any eventuality brought by the new and emerging digital technologies.
Finally, I would like to now delve more deeply into cybersecurity - a paramount issue for financial stability.
A major cyber incident has the potential to quickly ripple through financial markets, affecting liquidity, eroding confidence, and weakening core market infrastructures. In the European Union, the financial sector remains among the top five targets for cybercriminals, and the threat landscape is evolving. Geopolitical tensions are reflected in state-sponsored cyber incidents, and AI has amplified cyber threats. The European Union Agency for Cybersecurity (ENISA) estimates that artificial intelligence-supported phishing campaigns account for more than 80% of social engineering incidents worldwide. Finally, we are now observing a shift from isolated high-impact incidents towards continuous campaigns that erode resilience.
This represents a new form of significant risk – one that the traditional macroprudential framework was not designed to handle. We must therefore adapt our approach to this new fast-moving reality. One example is the cyber resilience scenario testing recently developed by Banco de Portugal. This test drew on a related ECB exercise and the ESRB framework, combining individual bank supervision with macroprudential concerns. It proved useful for both participating financial institutions and supervisors.
A key factor in building resilience within the financial system is collaboration and information sharing among all stakeholders. Although collaboration with other supervisory authorities, both domestically and internationally, is already well established, the Digital Operational Resilience Act (DORA) marks an important step forward. However, while DORA provides guidance on several crucial aspects - including the management of ICT-related incidents by individual institutions - it has not yet fully incorporated the macroprudential perspective that systemic resilience requires.
Whilst artificial intelligence has the potential to exacerbate cyber risks, it can also be used as a tool to mitigate cybersecurity concerns. Central banks and financial institutions can enhance their own resilience by harnessing the power of artificial intelligence. As an example, consider artificial intelligence's ability to handle vast data volumes, which is helpful for improving threat detection and accelerating incident response. Advanced machine learning frameworks can also support anomaly detection in payment systems.
Once again, we need to invest in these technologies so that central banks and supervisors stay ahead of the curve and make sure that the benefits of AI for the financial system vastly outweigh the risks posed by the swift diffusion of these path-breaking technologies.
I'll conclude with one final thought.
Governance must place human accountability and expertise at the core. Artificial intelligence systems should augment but not replace human decision-making. They can act as "copilots", rather than operating as fully autonomous agents in critical roles. Robust governance frameworks are therefore necessary across the entire artificial intelligence value chain.
Supervisory capabilities must be continuously strengthened and adapted. As technological adoption accelerates, supervisory authorities must keep pace to remain effective. This will require sustained investment in both digital infrastructure and human capital. Attracting and retaining talent in these highly specialised fields is, indeed, one of the key challenges that central banks face today.
Today's conference is designed to address the immense risks and opportunities that artificial intelligence brings to the financial system. The integrity and stability of our financial infrastructure depend on how well we navigate this new technology-driven era. Achieving this demands commitment, cross-border collaboration, and a willingness to adapt our models and institutions.
Thank you once again for joining us at the Banco de Portugal Financial Stability Conference. I hope today's discussions provide valuable insights and inspiration for the work that lies ahead.
Thank you.