Machine learning in risk models

Published: 25.04.2022

What is the best way to regulate machine learning in risk models? BaFin and the Deutsche Bundesbank asked the companies for their input. The findings of the consultation are now available – the dialogue continues.

Banks and insurers want to use machine learning (ML) in their risk models; BaFin and the Deutsche Bundesbank see such use as raising fundamental supervisory and regulatory issues and want to discuss these issues with the companies and their associations. The two authorities formulated a number of propositions and published them in a joint consultation paper in July 2021, entitled “Machine learning in risk models – Characteristics and supervisory priorities”. The participants’ responses have now been evaluated and summarised; the results paper is available on the BaFin website.

The consultation specifically dealt with internal models that are used to calculate the regulatory own funds requirements under Pillar I of the regulatory frameworks for banks and insurers. It also addressed the use of ML methods in risk management under Pillar II of these frameworks. While algorithms as such are not subject to supervisory approval, internal models must be approved by the supervisors, including where they involve the use of machine learning.

Machine learning in risk management

The consultation revealed that banks and insurers are already using machine learning methods in many areas, such as the detection of money laundering and fraud or analyses in credit processes. The companies are also using ML methods in distribution and product pricing.

Though there have been only a few instances of ML technologies being used in Pillar I risk models to date, some banks and insurers consider the technologies to be highly promising. These methods are already being used today to validate internal models, for example as support or challenger tools.
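
To make the idea of a challenger tool more concrete, the sketch below is a minimal, hypothetical Python example that is not taken from the consultation paper: an incumbent logistic-regression rating model and a gradient-boosting challenger are fitted to the same synthetic default data and compared on out-of-sample discriminatory power (AUC). All data, features and model choices are illustrative assumptions.

```python
# Minimal, purely illustrative sketch: benchmarking an incumbent rating model
# against an ML challenger. Data, features and model choices are assumptions
# made for illustration; this is not taken from the consultation paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced default data standing in for an obligor-level credit portfolio.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# "Incumbent" model: a simple logistic regression, typical of traditional PD modelling.
incumbent = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ML challenger: a gradient-boosting classifier trained on the same data.
challenger = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Compare out-of-sample discriminatory power (AUC) as one possible validation check.
auc_incumbent = roc_auc_score(y_test, incumbent.predict_proba(X_test)[:, 1])
auc_challenger = roc_auc_score(y_test, challenger.predict_proba(X_test)[:, 1])
print(f"AUC incumbent:  {auc_incumbent:.3f}")
print(f"AUC challenger: {auc_challenger:.3f}")
```

In practice, such a comparison would be only one element of a broader validation that also considers calibration, stability and data quality.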

Responses support BaFin and Bundesbank propositions

In their consultation paper, BaFin and the Bundesbank had suggested forgoing a definition of machine learning. Instead, they proposed supervisory practices that involved analysing a specific internal model in terms of certain characteristics and using these characteristics to determine the supervisory steps to be taken. This idea of a technology-neutral approach met with broad consensus. The figure below illustrates the characteristics-based view, using two internal ratings-based approaches (IRBA) for banks’ credit risk.

The consultation also revealed that banks, insurers and their associations consider the existing supervisory regulations to be sufficient, including for ML procedures. From their perspective, there is no need to reform the statutory requirements, at least at the fundamental level. The participants also endorsed the regulators’ and supervisors’ current focus on the volume and suitability of the data basis and on data quality, the importance of which will grow as the use of ML methods increases.

Methods of machine learning must be explainable

For BaFin, the Bundesbank and the consultation participants, the explainability of ML methods is a central factor for the successful application of machine learning. They all agree that further discussion will be needed to determine the point in time at which a model has changed so extensively due to machine learning that it would have to be approved again. From a supervisory point of view, it is therefore crucial that the development and use of the models remain comprehensible.
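
The consultation does not prescribe any particular explainability technique. As one common example, the hedged sketch below computes permutation feature importances for an illustrative ML model, i.e. how much the out-of-sample AUC deteriorates when each input feature is randomly shuffled; the data and model are again purely illustrative assumptions.

```python
# Purely illustrative sketch of one common explainability check: permutation
# feature importance. Data and model choices are assumptions for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced default data and an illustrative ML model.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Measure how much the out-of-sample AUC drops when each input column is shuffled;
# a large drop indicates that the model relies heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, scoring="roc_auc", n_repeats=10, random_state=0
)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"feature {idx:2d}: mean AUC drop {result.importances_mean[idx]:.4f}")
```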

Did you know?

BaFinTech in May

Machine learning will also be one of the issues addressed at the fourth BaFinTech – the BaFin forum focusing on FinTech and regulation – to be held in Berlin on 18 and 19 May 2022. This year’s event will be co-hosted by the Deutsche Bundesbank. The BaFin website will be providing more information on the topics planned for BaFinTech.

Transparency of specific supervisory expectations

In its general principles for the use of algorithms in decision-making processes, published in the summer of 2021, BaFin set out its ideas regarding the responsible use of artificial intelligence. Now, continuing its dialogue with the financial industry, BaFin will be specifying what it expects from the institutions in clear, transparent terms.

BaFin and the Bundesbank consider two things to be particularly important in this respect:

  1. They need to continue to follow their technology-neutral supervisory approach.
  2. They need to act within the scope of the existing regulatory frameworks.

This will grant companies a degree of certainty for planning their investments in ML methods and ensure they can identify the risks of such methods at an early stage.

In the view of BaFin and the Bundesbank, it is crucial that the requirements for the use of ML methods should be harmonised across Europe and uniform across sectors. To this end, they are also contributing the results of their consultation to the discussions on the Digital Finance Strategy put forward by the European Commission and discussing their findings with other European supervisory authorities.

Authors

Dr Matthias Fahrenwaldt
Division QRM 2

Stefan Nohl
Division Head QRM 2

Please note

This article reflects the situation at the time of publication and will not be updated subsequently. Please take note of the Standard Terms and Conditions of Use.
