The performance of BDAI/ML methods can prompt users to place blind trust in the data

The data on which BDAI/ML methods are based should be viewed as both a starting point and a success factor. Unstructured data can now be exploited by BDAI/ML methods. In addition, BDAI/ML methods allow for calculations that factor in a large number of determinants, which in turn may lead to overfitting, i.e. models that capture noise in the training data rather than genuine relationships. When large volumes of data are used, the quality of this data must be continuously ensured. This applies not only to model development and validation but also to model application. (See BaFin/Bundesbank, Consultation 11/2021 – Consultation paper: Machine learning in risk models – Characteristics and supervisory priorities)
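To illustrate the overfitting point, the following minimal sketch (an illustration added here, not part of the consultation paper; it assumes NumPy and scikit-learn are available) fits a linear model with more determinants than observations: the in-sample fit is near-perfect while out-of-sample performance collapses.

```python
# Minimal sketch: with many determinants and few observations, an
# unconstrained model can fit noise in the training data exactly.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 100, 80                  # many determinants, few observations
X = rng.normal(size=(n_samples, n_features))
y = X[:, 0] + 0.1 * rng.normal(size=n_samples)   # only the first feature matters

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# Near-perfect in-sample fit, markedly worse out of sample: overfitting.
print(f"R^2 train: {model.score(X_train, y_train):.2f}")
print(f"R^2 test:  {model.score(X_test, y_test):.2f}")
```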

Focus on explainability

As the complexity and the number of dimensions of a model’s hypothesis space increase, it becomes more difficult to describe the functional relationship between input and output, either verbally or with mathematical formulas. The calculations are then harder to understand for those modelling, using, validating and supervising the model in question, and it can also be more complicated to check the validity of the model’s output. User acceptance may suffer, too. Although such a “black box” characteristic may be justified, e.g. if it results in higher predictive performance, it potentially entails greater model risk. Explainable AI (XAI) methods have been developed to address this risk appropriately. However, even though XAI methods seem highly promising from a supervisory perspective as a means of mitigating the impact of this “black box” characteristic, they are themselves models with assumptions and weaknesses, and in many cases they are still being tested. (See BaFin/Bundesbank, Consultation 11/2021 – Consultation paper: Machine learning in risk models – Characteristics and supervisory priorities)
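As one concrete illustration of an XAI technique (an added sketch, not drawn from the consultation paper; it assumes scikit-learn is available), permutation importance probes a “black box” model by shuffling one input at a time and measuring the resulting drop in test performance. Note that, as stated above, the explanation method itself rests on assumptions: it can be misleading, for example, when features are strongly correlated.

```python
# Minimal sketch: permutation importance as one model-agnostic XAI technique.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box" in the sense above: hundreds of trees, no closed-form rule.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# How much does the test score drop when one feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```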

Adaptivity: model changes are more difficult to identify

In the banking sector, institutions are required to report changes to Pillar 1 models to supervisory authorities, and, in some cases, they may implement these changes only after they have been approved. There is no clear-cut distinction between regular model maintenance and model changes, especially since the meaning of the term “model change” also depends on the relevant supervisory context. The flexibility and, in some cases, high-frequency adaptivity of BDAI/ML processes make it more difficult to draw a clear line between adjustments and changes; such a clear distinction, however, is indispensable for supervisory purposes.
As a general rule, the need for high-frequency adaptivity should be thoroughly justified. From a supervisory perspective, it is crucial to adapt the training cycle to the specific use case, and to justify that choice, in order to strike a balance between keeping the data up to date and ensuring that models can still be explained and validated; one way of tying the retraining cycle to evidence from the data rather than to a fixed schedule is sketched below.
(See BaFin/Bundesbank, Consultation 11/2021 – Consultation paper: Machine learning in risk models – Characteristics and supervisory priorities)
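A hypothetical sketch of such a use-case-driven training cycle (illustrative only; the significance threshold, the sample sizes and the use of a Kolmogorov–Smirnov test are assumptions, not supervisory requirements, and NumPy and SciPy are assumed to be available): retraining is triggered, and the justification recorded, only when incoming data demonstrably drifts away from the data on which the model was validated.

```python
# Minimal sketch: tie retraining to evidence of data drift rather than to a
# fixed high-frequency schedule, and document each retraining decision.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time data
incoming = rng.normal(loc=0.4, scale=1.0, size=1_000)   # recent production data

# Two-sample Kolmogorov-Smirnov test: has the input distribution shifted?
statistic, p_value = ks_2samp(reference, incoming)
DRIFT_ALPHA = 0.01  # illustrative significance level, an assumption

if p_value < DRIFT_ALPHA:
    # Documented trigger: distribution shift detected, retraining justified.
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.4f}): "
          "schedule retraining and record the rationale.")
else:
    print("No material drift: keep the validated model unchanged.")
```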