Project AISE: AI-enabled supervisory analytics for retail payments supervision

15 April 2026

Project AISE (pronounced "ace") stands for AI Supervisory Enablement. It is a technical proof of concept exploring how central banks and supervisory authorities could use artificial intelligence (AI) responsibly in supervisory analytics. Grounded in retail payments supervision use cases, the project uses synthetic data throughout and is designed to produce practical findings and reusable lessons for future experimentation.

Supervisory work is increasingly shaped by digitally native business models, complex third-party dependency structures and large volumes of structured and unstructured material. This creates an opportunity to develop better ways to organise, interpret and document analysis at scale – and better foundations for doing so responsibly.

Project AISE takes a comparative approach. By testing alternative analytical architectures under identical conditions – including machine-readable reporting, knowledge graphs, deterministic analytics and constrained model-based synthesis – it aims to identify which foundations genuinely add value and where complexity can be avoided. The intent is to produce a stronger basis for institutional decision-making than vendor materials or abstract benchmarks can offer, along with reusable methods, evaluation patterns and sequencing guidance that authorities can adapt to their own constraints.
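The comparative set-up can be illustrated with a minimal sketch: alternative analytical pipelines are run on the identical synthetic input so their outputs can be compared side by side. All data, field names and the threshold rule below are hypothetical, not part of the project's actual design.

```python
import json
from typing import Callable

# Hypothetical synthetic records mimicking machine-readable
# retail payments returns (all firms and figures are illustrative).
SYNTHETIC_REPORTS = [
    {"firm": "PSP-001", "month": "2025-01", "fraud_rate": 0.0012, "volume": 120_000},
    {"firm": "PSP-001", "month": "2025-02", "fraud_rate": 0.0051, "volume": 118_000},
    {"firm": "PSP-002", "month": "2025-01", "fraud_rate": 0.0009, "volume": 340_000},
    {"firm": "PSP-002", "month": "2025-02", "fraud_rate": 0.0011, "volume": 352_000},
]

def deterministic_outlier_flags(reports: list[dict]) -> list[dict]:
    """Deterministic analytic: flag month-on-month fraud-rate jumps above 3x."""
    flags, last_seen = [], {}
    for r in sorted(reports, key=lambda r: (r["firm"], r["month"])):
        prev = last_seen.get(r["firm"])
        if prev and prev["fraud_rate"] > 0 and r["fraud_rate"] / prev["fraud_rate"] > 3:
            flags.append({"firm": r["firm"], "month": r["month"],
                          "rule": "fraud_rate_jump_3x"})
        last_seen[r["firm"]] = r
    return flags

def run_comparison(architectures: dict[str, Callable], reports: list[dict]) -> dict:
    """Run each candidate architecture on the identical input,
    recording outputs under the architecture's name for comparison."""
    return {name: fn(reports) for name, fn in architectures.items()}

# Further candidates (e.g. a graph-based or model-based pipeline)
# would be added to this dictionary and evaluated on the same data.
results = run_comparison({"deterministic": deterministic_outlier_flags},
                         SYNTHETIC_REPORTS)
print(json.dumps(results, indent=2))
```

In the real project the dictionary would hold the full set of architectures under test; the point of the pattern is simply that every candidate sees the same inputs, so differences in output reflect the architecture rather than the data.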

Governance and security are built into the design. Supervisory AI introduces real risks: unsupported outputs, prompt injection, retrieval manipulation, weak reproducibility and unclear accountability. The project tests whether useful AI-assisted capability can be developed within a constrained architecture that keeps outputs anchored in evidence, maintains provenance and supports structured human review. Its emphasis on locally deployable approaches is intended to make the findings relevant to authorities operating under strong confidentiality and control requirements.

The project's aim is a practical, evidence-based contribution to AI adoption in supervision – documenting what appears to work, what remains difficult and what governance foundations are likely to be needed before real-data experimentation can be conducted carefully and well.