
BaFin Perspectives – current issue

Published: 19.12.2018 | Topic: Fintechs

Supervision and Regulation in the Age of Big Data and Artificial Intelligence

Big Data and Artificial Intelligence are changing the financial markets and raising supervisory and regulatory questions that need to be answered.

Introduction

Big data (BD) and artificial intelligence (AI) are currently the subject of many social and academic discussions. Big data – which involves the emergence and rapid collection of large volumes of data from different sources – is a key element for applications of AI analytical methods. Significant progress is being made thanks to new technological developments, e.g. when identifying and processing language, faces, texts and images, and in the context of robotic process automation. The same is true for natural language generation. The productivity of artificial intelligence depends significantly on the scope and quality of the available data with which algorithms are trained and tested. For this reason, big data and artificial intelligence are not to be viewed in isolation and are referred to collectively in this article as "BDAI".

BDAI is also becoming increasingly relevant in day-to-day business operations due to three factors: the technological progress mentioned above, market competition and changing consumer behaviour. Technological progress is continuously lowering the cost of BDAI and making it easier to use in practice. For instance, the processing power of computers has increased exponentially, inexpensive storage space is increasingly available, and hardware performance keeps improving. Overall, these developments are reducing technology costs and removing barriers to BDAI usage.

As far as competition is concerned, one can observe that many companies are increasingly relying on the analysis and use of data to optimise their business models and processes. This market situation has led to the emergence of many data-driven business models.1 In addition, the user-friendliness and rapidity of new technological means have allowed many consumers to turn to digital applications, resulting in a self-reinforcing cycle of data and applications, and we can expect this trend to continue. The number of networking possibilities between humans, machines and processes is constantly growing.

BDAI technologies have the potential to fundamentally change the financial sector as well. The risks and opportunities are enormous. In its report "Big data meets artificial intelligence – Challenges and implications for the supervision and regulation of financial services"2, BaFin analysed the changes that an increased use of BDAI could bring for the financial market as a whole, firms, consumers – and also supervisors. These changes require supervisory and regulatory attention at an early stage – including the risks that BDAI applications could potentially involve. The main challenges that BDAI could entail for prudential regulation and consumer protection are described below.

Prudential regulation

If we look at how BDAI works and the impact it has from a bird's eye view, it is quickly clear why BDAI applications have the potential to fundamentally change the financial market. Financial services heavily depend on information and evaluations thereof. With BDAI, it is possible to obtain a growing amount of increasingly precise information. With this information, new evaluations can be made thanks to BDAI, for instance, in relation to asset prices, creditworthiness or risk profiles in the context of health insurance.3 If these evaluations surpass conventional processes, the providers making use of these evaluations will have a competitive advantage. For instance, if a company is able to better assess the creditworthiness of an individual than its competitors, it can demand a more risk-adequate price and gain an edge over its competitors in the long term. Thus, BDAI is a phenomenon that will result in a certain amount of competitive pressure, and companies that intend to remain on the market will have no choice but to prepare themselves for the use of new BDAI methods.4

As BDAI is making some information accessible that was previously unavailable and is facilitating more precise evaluations, providers are able to offer new products and services – with a potentially unlimited reach. For example, predictive analytics can be used to forecast the likelihood of events that could not be predicted or were very difficult to predict in the past. Insurers can thus offer products for such events if there is sufficient demand. But it is mainly the customer information that can be gained thanks to BDAI that is now allowing for more personal contact and personalised products. Many users are already familiar with these seemingly personal interactions via computer or smartphone based on their experiences with many online service providers outside the financial services sector, and their expectations for these services are the same for other areas as well, particularly financial services.

At financial companies, too, there are many processes where data is generated and needs to be evaluated before decisions can be taken. In the case of payment transactions, for instance, huge amounts of data are generated and analysed to detect money laundering, among other things. Patterns and connections that could not be identified in the past can now be identified using BDAI – at a significantly lower cost. To give another example, BDAI could also be used for settling claims at insurance companies. A growing number of decisions can be automated and prepared using BDAI. In the past, it was essential that procedures were extensively predefined when automating processes – and algorithms were unable to adapt. But in the context of BDAI, self-learning algorithms are increasingly being used. As a result, increasingly complex processes can be (partly) automated. Competitive pressure – in terms of costs – could be another catalyst for the use of BDAI.

Using BDAI could thus result in key competitive advantages on the financial market, too. Firms that are supervised by BaFin will also take advantage of this, especially to increase their effectiveness and efficiency. Financial supervisors are therefore faced with the question of whether and how supervision and its foundations – regulation – need to be adjusted, and they will also have to examine which established principles should continue to apply. A number of key aspects are examined below:

Who is to be held accountable: algorithms or humans?

In the case of firm supervision – e.g. the supervision of banks – there is a common thread running through the requirements that supervisors impose. All of the decisions that are taken within a bank must be embedded in a proper business organisation. Section 25a of the German Banking Act (Kreditwesengesetz – KWG) stipulates the following: "An institution shall have in place a proper business organisation which ensures compliance with the legal provisions to be observed by the institution as well as business requirements." Under section 25a of the KWG, the management board is responsible for ensuring the institution’s proper business organisation. Supervisors will ensure that this common thread, which similarly runs through insurance supervision, continues to apply as BDAI spreads and the use of algorithms increases: humans are and will continue to be held accountable.

This does not mean that the use of algorithms is to be prohibited. But every single algorithm, just as every single employee within an institution, must be part of a proper business organisation. Those within and outside the institution need to be able to understand and check their decisions – especially when reaching or at least making preparations for important and thus risk-entailing decisions. Neither humans nor algorithms should be able to do whatever they want unchecked within an institution.

Decision-making and evaluation processes can be complex. If BDAI is to be used, it is important to ensure that the reasons behind decisions can still be traced. If new types of algorithms or highly complex ones are used, companies often quickly refer to black boxes as an argument: for instance, an innovative algorithm is generating highly precise forecasts, but the reasons why and the basis on which it operates cannot be traced and, unfortunately, cannot be verified by supervisors. This line of argument is unacceptable for supervisors, and management boards, too, would be well-advised not to accept this within their organisations as this potentially points towards a dysfunctional business organisation.

Experts in academia and (applied) research have also confirmed how important the explainability and transparency of algorithms are when they are used, and they have developed processes and tools for this purpose. It is now possible to ensure the explainability of complex analytical processes as well. Complex algorithms and automated processes do not need to be ruled out for the financial sector, but it is important not to forget that it is necessary to invest in their transparency and explainability as well. Firms should see it as an incentive that only sufficiently transparent algorithms allow errors in the analytical process to be identified and rectified at an early stage, extending the possibilities for BDAI applications even further.
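One family of such tools consists of model-agnostic explainability measures. As a purely illustrative sketch (the article does not prescribe any specific technique, and the toy model, data and feature names below are invented for demonstration), permutation importance measures how much a model's accuracy drops when one input feature is shuffled – a simple way to make an otherwise opaque scoring model more traceable:

```python
import random

# Toy "credit default" dataset: (income_band, late_payments) -> default label.
# Illustrative data only - not drawn from any real portfolio.
data = [
    ((3, 0), 0), ((1, 4), 1), ((2, 1), 0), ((1, 3), 1),
    ((3, 1), 0), ((2, 4), 1), ((1, 0), 0), ((2, 3), 1),
]

def model(features):
    """Stand-in for an opaque scoring model: predicts default
    when late payments outweigh the income band."""
    income_band, late_payments = features
    return 1 if late_payments > income_band else 0

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

def permutation_importance(rows, feature_idx, seed=0):
    """Accuracy drop when one feature column is shuffled:
    a simple, model-agnostic explainability measure."""
    rng = random.Random(seed)
    column = [x[feature_idx] for x, _ in rows]
    rng.shuffle(column)
    shuffled = [
        (tuple(v if i != feature_idx else column[j] for i, v in enumerate(x)), y)
        for j, (x, y) in enumerate(rows)
    ]
    return accuracy(rows) - accuracy(shuffled)

for idx, name in enumerate(["income_band", "late_payments"]):
    print(name, round(permutation_importance(data, idx), 2))
```

A feature whose shuffling barely changes the accuracy contributes little to the model's decisions; large drops point to the inputs that drive them.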

Supervisory standards for self-learning systems

Will supervisors need to define supervisory standards for self-learning systems in the near future? Will there soon be Minimum Requirements for Algorithms/Data – based on the Minimum Requirements for Risk Management (Mindestanforderungen an das Risikomanagement – MaRisk) at financial institutions? One thing has to be said first: creating additional regulations is not the main objective. Just because one step of the process, which has not been subject to BaFin's supervision so far, is now being executed by an algorithm and not a human does not mean that the entire process needs to be regulated and supervised. The question of whether and the extent to which financial regulation is to be amended is to be discussed in another context. Here, we are discussing aspects for which BaFin is already responsible as a supervisory authority as part of its legal mandate. The key question is: how do supervisors and regulators need to change their approach when examining these aspects if an algorithm is involved instead of a human?

In the previous section, we argued that the explainability and traceability of an algorithmic solution are key prerequisites for embedding them within a proper business organisation. Results and processes also need to be sufficiently documented.

Assuming that these basic requirements for algorithms (explainability, transparency and embedding them within a proper business organisation) are fulfilled, which standards should apply in regular operations or in (re-)calibration phases? When and to what extent supervisors must intervene should, of course, depend on the risk relevance of the application concerned.

This section outlines a number of ideas on how firms could use self-learning algorithms carefully and wisely. For example, institutions could calibrate, test and validate innovative approaches in a secure environment before using them in customer operations. In such an in-house test environment, the behaviour of an algorithm can be observed and traced in various situations – without resulting in any damage. Before BDAI solutions are used, one option would be to run them separately at the same time as existing systems. Potential risks could be isolated, quantified and eliminated in these parallel operations. Live operations should begin only after this has taken place. And if these algorithms are successfully implemented in live operations, further monitoring and ongoing validation are essential. This is especially due to the fact that self-learning systems are constantly evolving when they are fed new data.
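The parallel operation described above can be pictured as a shadow run: the candidate BDAI model processes the same cases as the incumbent system, but only the incumbent's decisions take effect. The following is a minimal sketch under assumed names and thresholds (the decision rules, the 10% divergence limit and the field names are all invented for illustration):

```python
# Hypothetical shadow-run harness: a candidate model and the incumbent
# system see the same cases; divergences are collected for review and
# the candidate is only cleared for live operations if they are rare.

def incumbent_decision(case):
    return case["amount"] > 10_000          # existing rule-based check

def candidate_decision(case):
    return case["amount"] > 9_000 or case["new_country"]  # model under test

def shadow_run(cases, max_divergence=0.10):
    """Run both side by side; report the divergence rate and whether the
    candidate may be promoted to live operations."""
    diverging = [
        c for c in cases if incumbent_decision(c) != candidate_decision(c)
    ]
    rate = len(diverging) / len(cases)
    return {"divergence_rate": rate,
            "promote": rate <= max_divergence,
            "cases_to_review": diverging}

cases = [
    {"amount": 12_000, "new_country": False},
    {"amount": 9_500,  "new_country": False},
    {"amount": 500,    "new_country": True},
    {"amount": 8_000,  "new_country": False},
]
print(shadow_run(cases))
```

Every diverging case is kept for human analysis, so potential risks can be isolated and quantified before the algorithm ever affects customers.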

In live operations, algorithms often make use of many different data flows which may have been generated by algorithms themselves. This can result in self-reinforcing decision-making cascades. It could be worth taking a look at the tools available on the capital market: technological safeguards such as automatic volatility interruptions are common practice there. Such automatic interruptions could also be useful safeguards for algorithmic decision-making processes – provided that they are properly calibrated; otherwise, the number of mistakes and problems could increase even further.
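Such an automatic interruption for algorithmic decision-making could be sketched as follows. This is an illustrative assumption, not an established supervisory mechanism: the window size and the 20% deviation threshold are invented and would themselves need the careful calibration the text calls for.

```python
from collections import deque

class DecisionCircuitBreaker:
    """Illustrative safeguard, analogous to a volatility interruption:
    if too many automated decisions in a sliding window deviate from
    the expected outcome, the pipeline is halted for human review."""

    def __init__(self, window=100, max_deviation_rate=0.2):
        self.window = deque(maxlen=window)
        self.max_deviation_rate = max_deviation_rate
        self.halted = False

    def record(self, deviated: bool):
        """Register one decision outcome; trip the breaker if the
        deviation rate in a full window exceeds the threshold."""
        if self.halted:
            return "halted"
        self.window.append(deviated)
        rate = sum(self.window) / len(self.window)
        if len(self.window) == self.window.maxlen and rate > self.max_deviation_rate:
            self.halted = True
            return "halted"
        return "running"

breaker = DecisionCircuitBreaker(window=10, max_deviation_rate=0.2)
for outcome in [False] * 8 + [True] * 4:   # sudden cluster of deviations
    status = breaker.record(outcome)
print(status)
```

Once tripped, the breaker refuses further automated decisions until a human resets it – interrupting a self-reinforcing cascade instead of letting it run unchecked.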

Specific calibration of requirements

In the previous sections, a number of examples were provided to describe the general basic conditions that would be needed when using algorithms. However, in some situations, it could be necessary to also set detailed and, in some cases, quantitative requirements for the results of BDAI applications. For example, if BDAI is to be used to detect money laundering, supervisors should be able to assess whether the algorithm that is being used is sufficiently effective, i.e. whether it reliably detects activity that genuinely indicates money laundering, and sufficiently efficient, i.e. whether it screens out activity that is not suspicious. But supervisors will only be able to intervene in justified cases and request model readjustments if they have defined clear standards setting out the requirements for effectiveness and efficiency.
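In quantitative terms, effectiveness and efficiency in this sense correspond to familiar classification metrics: the recall of truly suspicious cases and the rate of false alarms on clean cases. A minimal sketch, assuming boolean algorithm output and labelled ground truth (the data and any threshold values are invented for illustration; the article prescribes no concrete figures):

```python
# "Effectiveness" and "efficiency" of an AML detection algorithm
# expressed as recall and false positive rate.

def evaluate(flags, labels):
    """flags: algorithm output (True = flagged as suspicious);
    labels: ground truth (True = actually suspicious)."""
    tp = sum(f and l for f, l in zip(flags, labels))
    fn = sum((not f) and l for f, l in zip(flags, labels))
    fp = sum(f and (not l) for f, l in zip(flags, labels))
    tn = sum((not f) and (not l) for f, l in zip(flags, labels))
    recall = tp / (tp + fn)    # effectiveness: suspicious cases caught
    fp_rate = fp / (fp + tn)   # inefficiency: clean cases wrongly flagged
    return {"recall": recall, "false_positive_rate": fp_rate}

flags  = [True, True, False, True,  False, False, True,  False]
labels = [True, True, True,  False, False, False, False, False]
metrics = evaluate(flags, labels)
print(metrics)
```

Standards of the kind the text calls for could then take the form of thresholds on these two numbers (e.g. a minimum recall and a maximum false positive rate – assumed examples), giving supervisors an objective basis for requesting model readjustments.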

Data integrity

Just as supervisors need to clearly define the expected quality of results, algorithms need feedback for their calibration. Algorithms need to know which predictions are right and which are wrong. Accurate data that is relevant to the results needs to be available. Should Minimum Requirements for Algorithms become necessary, Minimum Requirements for Data will be necessary as well. This, however, is not trivial in the context of BDAI, which is largely characterised by the fact that key (and correct) information is generated from unstructured data.

Companies therefore need to continue to ensure that only accurate data that is relevant to the results is used for algorithms. It is a myth that business decisions based on algorithms yield objectively better results for that reason alone. The opposite could be true, as wrong decisions caused by faulty algorithms or unsuitable input data may be more difficult to detect than errors in conventional decision-making processes.

This problem can also multiply as the reach of algorithm-based decisions – i.e. the number of people concerned – is typically significantly higher than in a paper-based world. Ongoing quality checks – not only with regard to the embedded algorithms but also the data used – will therefore play a significantly more important role than in the past. Supervisors and regulators will have to derive solid supervisory standards on this basis.
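An ongoing data quality check of the kind described above could start very simply: validating incoming records against an expected schema and value ranges before they feed any algorithm. The field names and ranges below are invented assumptions, not supervisory requirements:

```python
# Minimal sketch of an ongoing input data quality check, assuming the
# firm has defined expected value ranges for each input field.

EXPECTED = {
    "age": (18, 120),
    "income": (0, 10_000_000),
}

def quality_report(records):
    """Count missing and out-of-range values per field, so data problems
    surface before they propagate into algorithmic decisions."""
    report = {field: {"missing": 0, "out_of_range": 0} for field in EXPECTED}
    for rec in records:
        for field, (lo, hi) in EXPECTED.items():
            value = rec.get(field)
            if value is None:
                report[field]["missing"] += 1
            elif not lo <= value <= hi:
                report[field]["out_of_range"] += 1
    return report

records = [
    {"age": 34,   "income": 52_000},
    {"age": None, "income": 48_000},
    {"age": 250,  "income": 51_000},   # implausible value
]
print(quality_report(records))
```

Run continuously rather than once, such checks turn the "Minimum Requirements for Data" idea into something operational: thresholds on missing and implausible values that trigger review before decisions are affected.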

Model changes and approved applications

As described above, BDAI models are also characterised by the fact that they take into account large amounts of data to make predictions or reach decisions, often in real time. In particular, self-learning elements can allow models to continue evolving by taking into account additional data input and the information it contains. Models and their calibration are constantly changing and improving. Supervisors need to keep an eye on the fact that models they have already approved may continue to develop. This raises a few fundamental questions: for example, in relation to the extent to which a supervisory approval is valid and when developments may be deemed model changes in the supervisory sense. But this particularly also raises the question of how much dynamic change in a model may be deemed admissible in order for an approval to be granted. Supervisors will need to find answers to these questions – based on concrete cases and as part of a dialogue with all those concerned.

BDAI and systemic risks: who will we be supervising in the future?

Promising processes – such as deep learning – require huge amounts of data ("the more, the better") in order to generate interesting results that can form the basis for product and process innovations. The advantages that BDAI processes bring will continue to grow if companies collect not only information on customer preferences but also information on their spending behaviour – for example, information relating to their current accounts or other payment accounts. Their BDAI algorithms could then be fed with far more accurate data. This shows that those who have the right to use abundant amounts of data, preferably also financial data, have huge advantages when developing new, promising BDAI-based products and services – especially outside the financial sector. And the use of these products, in turn, helps generate new data.

This self-reinforcing cycle is also driven by the "pay-with-data" business model5 implemented by a number of bigtechs. Natural data and analysis monopolies could emerge and could foster a "winner-takes-all" market structure. By entering ever new markets, companies are able to link ever more data from various sources. BDAI applications can help achieve portfolio and conglomerate effects6 and make use of economies of scope and scale.

Due to the wide spread and high number of users, dominant data and algorithm providers that are entering the financial market with their own financial services – that may also be cross-subsidised – could very quickly become systemically important directly. However, such providers could also become important in the financial system indirectly, for instance, if they sell information on how to calculate risks more precisely to a large number of players on the financial market. However, interconnectedness does not necessarily have to arise through the sale of information. It is also possible that providers will make algorithms and infrastructure (services) available to players on the financial market (see also "Pooling and utilities").

But if stakeholders on the financial market increasingly use the data or algorithms offered by only a few large providers, this could also have macroprudential consequences. Firstly, this would result in a strong reliance on these providers. What would happen, for example, if data and models contain errors or these providers' infrastructures are inoperative? Secondly, this could lead to procyclical effects if a large number of players on the financial market draw the same conclusions and strategies for action based on certain events because they are using the same algorithms. An analogy to the role of rating agencies comes to mind.

Such risks can arise as a result of insourcing, outsourcing or other BDAI-supported services obtained by third parties. And if these risks are no longer within the organisational structure of supervised firms, there is a risk that they can no longer be fully identified or managed. It is therefore necessary to examine whether the definition of systemic importance in the supervisory sense and thus the possibility of introducing mitigating measures need to be revised to accommodate the new circumstances described above.

This is closely linked to the question of who and what needs to be subject to (financial) supervision and how this should be done. For example, will providers that offer structural expertise and information on the financial market need to be supervised although they are not providing financial services themselves? One well-known idea from the field of market supervision could be applied here: establishing conduct of business rules for companies that are not supervised by BaFin and monitoring compliance with these rules.

Scenario: Pooling and utilities

BDAI could facilitate the pooling of data, technology and expertise in addition to the use of utilities, as the success of BDAI applications depends on two key criteria: data and technology (including the relevant analysis expertise). Not all companies fulfil both criteria. If some companies do not have enough data to make the most of BDAI, it may be useful to combine data packages in pseudonymised or anonymised form. For example, some companies may not have enough data points to provide the necessary feedback for (self-learning) algorithms, and/or their calibration may be difficult to perform. But if multiple companies pool their data, the critical mass of data that is needed can be achieved. This makes enough data available for data-driven innovations.
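One common way to pool records in pseudonymised form is to replace direct identifiers with a keyed hash before contributing data, so that records about the same customer can still be linked across firms without exposing the identity itself. The following sketch is an assumption-laden illustration (the shared key, field names and pooling arrangement are invented; real pooling arrangements require far more, as the next paragraph notes):

```python
import hashlib
import hmac

# Hypothetical key agreed by the pooling participants - illustrative only.
POOL_KEY = b"shared-secret-agreed-by-the-pool"

def pseudonymise(customer_id: str) -> str:
    """Keyed hash of the identifier: stable across firms that share the
    key, but not reversible to the raw identifier."""
    return hmac.new(POOL_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

def contribute(records):
    """Strip direct identifiers; keep only the pseudonym and features."""
    return [{"pid": pseudonymise(r["customer_id"]), "feature": r["feature"]}
            for r in records]

firm_a = contribute([{"customer_id": "alice", "feature": 0.7}])
firm_b = contribute([{"customer_id": "alice", "feature": 0.4}])

# Records about the same customer link across firms,
# but the raw identifier never leaves either firm.
assert firm_a[0]["pid"] == firm_b[0]["pid"]
assert "customer_id" not in firm_a[0]
```

Whether such a scheme actually qualifies as pseudonymisation or anonymisation in the legal sense depends on the technical, organisational and legal safeguards discussed next.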

However, pooling data, technology and expertise is only possible if the technical, organisational and legal requirements for this are met. It is also essential that the data sovereignty of the individual firms can be guaranteed, especially when pooling data. One example is the Industrial Data Space initiative jointly launched in 2014 by members of the fields of business, politics and academia.

BDAI applications could thus lead to an increase in the importance of utilities, i.e. vehicles in which multiple companies come together to allow for better analyses, achieve cost advantages and pursue similar interests. In the financial sector in particular, supervisory and regulatory requirements could be met in a more targeted way by combining expertise and co-developed solutions (e.g. regtech applications, money laundering prevention, know-your-customer processes). And BDAI could drive this trend. The objective is to achieve economies of scope and scale.
Supervisors now have to examine the question of how new risks are to be taken into account accordingly and addressed when pooling is on the rise and the use of utilities is growing.

Consumer protection

The digital revival of the traditional corner shop model

It should first be noted that the use of BDAI could result in huge advantages for customers and consumers. This becomes evident when looking at the more recent past. Up until the 1980s, small stores mainly provided local residents with food and other everyday products. Shopkeepers had their customers' trust and deep insights into their private lives. They were also aware of what their customers wanted and needed, made offers that were tailored to them and conducted business quickly and easily. If someone wanted to buy fresh salmon at the weekend – a product that wasn't available otherwise due to low demand – these shopkeepers would have a certain amount of salmon every Friday as part of their product range. Such tailored offerings were beneficial to both sides. Customer satisfaction and customer loyalty were high. Customers could even pay for items at a later date if they didn't have any or enough money on them. These advantages disappeared with the arrival of supermarkets, and there was a typical trade-off between information breadth and information depth. The Internet and digitalisation are now making it possible to dismantle this paradigm.

The traditional corner shop model can now essentially be applied and scaled to all areas of life. But this requires a deep understanding of the requirements and needs of the customer. BDAI provides the tools for this without having to build personal relationships between individuals while still gaining access to highly personal data. In simplified terms, BDAI is enabling the large-scale revival of the traditional corner shop model – in all sectors. The key question is whether providers and customers will equally benefit from this revival.

Let's take another look back at the time when smaller stores were more common: what made the relationship between customers and retailers so special back then? Customers were always the ones who decided whether and what they wanted to reveal to retailers. In addition, information, which was highly personal at times, was (ideally) only available to the relevant retailer, and customers kept control and had an overview of what retailers knew about them. The solid relationship that customers and retailers had was, in some cases, comparable to the relationship they had with their doctor, pastor or lawyer. Retailers were able to gain an overall impression of the personality of their customers, their circumstances and their needs and wants. Breaching their customers' trust could have considerable business implications – in addition to personal and social implications. In the analogue world, there was a balance of power between customers and retailers, and retailers used the information they had almost exclusively in order to pursue their own business goals. This information was worthless to third parties because it could not be sold to other people.

All of this fundamentally changed with digitalisation and the emergence of new business models (e-commerce, platform business and virtual networks). Even without face-to-face interactions, the needs and wants of customers can be extensively and automatically analysed nowadays. The use of BDAI makes personal relationships – but not personal data – superfluous for gaining information. Consumers are not dealing with an actual person whom they know and trust and who analyses them precisely. They are also unaware of what their data could be used for and what it is worth. In addition, it is very difficult for consumers to find out whether they are potentially being discriminated against as a result of the data they have provided.7

Instead of seeking personal contact, reaching a critical mass of users and achieving network and conglomerate effects is particularly crucial for gaining the aforementioned information in the world of BDAI. As massive amounts of user data become available, the required amounts of data are generated as an input for new analytical methods (e.g. deep learning). Companies can initially use information on customer preferences for targeted product marketing or getting in touch with customers. But what is new is that this information is now also valuable for third parties and companies can sell it. Highly personal information can be monetised – also by selling it to third parties. Customers can quickly lose their overview of what companies know about them and what the data they originally gave will ultimately be used for. Gone are the days when there was a balance of power: BDAI could result in significant power and information asymmetries between customers and companies.

In this context, companies are particularly interested in financial data because it reveals a person's economic core (income, assets, payment transactions/spending behaviour, contractual relationships, health status etc.). Shopkeepers would also have been interested in this detailed information but would have only been able to draw imprecise conclusions, e.g. based on the clothes or profession of their customers. As financial data is particularly sensitive, customers gave and still give their data only reluctantly, to a limited extent – and only to a small number of people they trust. In addition, shopkeepers would have had a difficult time making the most of their customers’ willingness to pay since they would not have been able to constantly adjust their pricing, for instance, if a solvent and demanding customer entered the store while another less solvent customer was still being served. But on the Internet, this is all happening very fast. Financial data as a commodity can also represent a key means to maximise profits for companies nowadays – potentially at the expense of the customer (see "Making the most of consumers' willingness to pay in order to maximise profits").

However, from a consumer protection perspective, customers must, even today, be able to decide who they want to give their data to and for what purpose. Data sovereignty is important, especially when this involves financial data. It is also necessary to ensure that new ways to gain information are not used against consumers. There is a thin line between legitimate and authorised differentiation and prohibited discrimination.8 Collection and analysis activities that are common in some online services and other data-driven business models certainly cannot be applied to financial data in the exact same way.

Two key questions are therefore addressed below:

  • How can customers keep control over their data in the new world of BDAI? In other words: how can data sovereignty be ensured in the context of mass and self-learning data analyses?
  • And how can discrimination-free access to financial products be ensured, even in the context of BDAI?

Data sovereignty within the context of mass and self-learning data analyses

How can we ensure data sovereignty within the context of mass and self-learning data analyses? The main requirements for data sovereignty are suitable and transparent information on data usage and the potential consequences, reliable options for controlling how data is used (also after data has been released) and actual freedom of choice.

Suitable and transparent information

In order to be able to reach sovereign decisions, customers need to initially understand why they need to provide data and what companies may potentially use it for. They should be able to assess the potential consequences of releasing their data. Customers need to be informed of this appropriately and transparently. It should be noted that in most cases customers do not read data protection policies if they consider them to be unclear or difficult to understand. Data protection policies therefore need to be clearly formulated and tailored to the specific decision-making situation. For example, the FZI Research Center for Information Technology (Forschungszentrum Informatik – FZI) suggests in its report "Smart Data – Smart Privacy?" that consumers be given the results of data protection impact assessments (as referred to under Article 35 of the European General Data Protection Regulation (GDPR)) in simplified form as the basis for deciding whether to provide data or not.9 The FZI is also of the opinion that a uniform scale system that is easy to understand or an intuitive traffic light system that highlights the risks that are associated with data usage would be a good way to inform customers.

Germany's Advisory Council for Consumer Affairs (Sachverständigenrat für Verbraucherfragen) suggests using a one-page privacy policy to inform customers quickly and easily.10 From a supervisory point of view, such simplified options seem to be – at least as supplementary information to data protection policies as we know them today – promising and should be given further thought (see "Areas where financial supervision and data protection issues could meet").

Note: Areas where financial supervision and data protection issues could meet

One of BaFin's legal duties is to ensure that market participants and consumers can trust the functioning, stability and integrity of the financial market. If customer data is to increasingly become a commodity, customers will become data suppliers at the same time. It is vital that the interests of all market participants (including those of consumers) are equally taken into account. Data protection authorities are primarily responsible for ensuring that this is the case. However, there may be cases where financial supervisors could directly be called on to take action.

  • Following the entry into force of the European General Data Protection Regulation (GDPR) at the end of May 2018, firms supervised by BaFin may face large fines if they breach data protection regulations. As these fines may also have a significant impact on a firm's solvency in extreme cases, data protection violations are also an issue for financial supervisors.
  • If data protection violations become more frequent, this could raise doubts as to whether business operations are running properly and supervisors could be called on to take action.
  • If a firm supervised by BaFin systematically and intentionally violates relevant regulations when using customer data, this could also raise doubts in relation to the suitability of management in some specific cases.

Reliable options for controlling how data is used

Users should still be able to keep control over their data even after it has been released. They must be able to keep an overview of the data they have provided and whom they have provided it to, be able to obtain information on how their data is going to be used, and be able to easily withdraw their consent to their data being used. The right to have data deleted and to be forgotten must be ensured.

One idea to ensure that customers have an overview of how their data is used, and to enable companies to provide this, involves implementing automated protocols when setting up databases and data management systems. For example, a note could be attached to each piece of data, stating which analyses it may be used for and by whom. An algorithm could then only access the piece of data if the note grants it access. In addition, a log file would automatically be kept for each piece of data, recording when, for what use and by whom (e.g. by which algorithm) it has been accessed.

With solutions like these, which are commonplace in traditional data management and in other contexts, firms can keep an overview of how customer data is used and can manage both data and user profiles. As a result, companies with such a data management system can find out very quickly how, when, for what purpose and by whom customer data has been used. If a customer withdraws their consent to data usage, this can be implemented relatively quickly. Germany's Advisory Council for Consumer Affairs also recommends setting up a consumer-oriented data portal.11 Such a portal could give consumers more control over how various providers use their individual data. The objective is for consumers to be able to delete and change their data centrally and to manage access rights in one place.
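Such a tagging-and-logging mechanism could be sketched roughly as follows. This is a minimal illustration, not a reference design; all class and purpose names are hypothetical:

```python
import datetime

class DataRecord:
    """A piece of customer data with an attached usage note and access log."""

    def __init__(self, value, allowed_purposes):
        self.value = value
        self.allowed_purposes = set(allowed_purposes)  # the attached "note"
        self.access_log = []                           # the automatic log file

    def access(self, accessor, purpose):
        """Grant access only if the note allows this purpose; log every attempt."""
        granted = purpose in self.allowed_purposes
        self.access_log.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "who": accessor,
            "purpose": purpose,
            "granted": granted,
        })
        if not granted:
            raise PermissionError(f"{accessor} may not use this data for {purpose}")
        return self.value

    def withdraw_consent(self, purpose):
        """Implement a customer's withdrawal of consent for one purpose."""
        self.allowed_purposes.discard(purpose)

# Usage: a (hypothetical) scoring algorithm may read the record until the
# customer withdraws consent; every attempt, granted or not, is logged.
record = DataRecord(value=52000, allowed_purposes={"credit_scoring"})
record.access("scoring_algorithm_v1", "credit_scoring")
record.withdraw_consent("credit_scoring")
```

The log makes it possible to answer, at any time, exactly the questions raised above: when, for what purpose and by whom each piece of data has been used.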

Actual freedom of choice

In addition to information and monitoring, it is vital for data sovereignty that customers are given actual freedom of choice as to how their data is used. The basic principle of any sovereign decision is a feasible alternative: if people do not have real freedom of choice, they cannot reach any decisions, least of all sovereign ones. Customers must not effectively be forced to agree to extensive use of their data and must have (at least) one alternative. One burning question is what these alternatives specifically need to involve to ensure that sovereign decisions can actually be made. Is it enough if products for which customers have to provide less data are available somewhere on the market? Or should every single company offer alternative products as well? What should these alternative products look like? They would probably not have the same features as the products available to customers who provide more data. And yet they should not be entirely unattractive to the customer, because this would mean that there is no actual freedom of choice.

Companies may also give customers the opportunity to approve the use of some data for a clearly defined purpose and within a limited timeframe. Many BDAI applications could also run via privacy-preserving data mining on the basis of anonymised data. Sink-or-swim situations, where the customer can only choose between providing an extensive amount of data and not using a product or service at all, have nothing to do with freedom of choice. It is essential that customers can generally decide not to provide data that goes beyond what is necessary for meeting the terms of the contract they are seeking to enter into.

And when people have to grant access to data from social media, apps and portals in order to obtain financial products at better conditions, this cannot be described as a sovereign decision either. Customers who do not want to do this, or who do not have such data (e.g. customers who are not familiar with digital processes and systems), would be at a huge disadvantage.

Discrimination-free access to financial products within the context of BDAI

Differentiation based on personal data is common and makes sense in principle. For example, if a customer wants to take out vehicle insurance, the insurer is explicitly required under the applicable supervisory law to charge a risk-adequate price. The difficult question is: where does useful and desirable risk adequacy and differentiation end, and where does discrimination aimed only at maximising profits begin? (See also "What can differentiation lead to?".)

Scenario: What can differentiation lead to?

The new forecasting options that BDAI offers can be compared with the zoom function of a high-definition screen. Where it was previously only possible to get a vague picture, highly precise information is now available and can be analysed and "zoomed in on" almost endlessly. The differentiation opportunities associated with BDAI are thus not completely new, but they are significantly better and more precise than older methods.

Risk assessment in health insurance is one example. BDAI could possibly allow human health risks to be predicted with even greater precision. Conventional information channels alone, such as medical reports, could be evaluated better thanks to BDAI. But BDAI also allows information from medical reports to be combined with information from social media. With this additional data, which in many cases is provided by customers themselves, increasingly precise risk differentiation is possible. Irrespective of this, it is possible that BDAI will further improve medical diagnosis and forecasting (e.g. predictive analytics). What will these developments lead to?

Will such precise risk forecasts and differentiation result in significant customer groups being excluded from the community of policyholders paying a risk-adequate price, because they can no longer afford to insure their risks (as these can now be assessed more accurately)? Will people with "good" risks be the only ones able to get insurance? Who will then bear the risks of those who were previously part of this community? Will it be society, or in other words, taxpayers?

It can be assumed that extensive BDAI-supported risk selection will give rise to social debates, which in itself would be nothing new – think of the insurability of terrorism risks. However, the magnitude of the debates and the impending issues resulting from BDAI-supported risk selection could reach a whole other level. It is possible that not everything that is technically feasible will be useful or acceptable, including in the financial sector.

In the context of BDAI, the right balance needs to be found between necessary differentiation and undesired discrimination, and discrimination-free access to financial products needs to be ensured. As mentioned above, BDAI provides deep insights into the private sphere of customers, for instance their preferences, wishes and their willingness and ability to pay. This information can be used in the customer's interest to tailor products and services to their needs. But it can also be used deliberately against consumers, or at least to their disadvantage. A provider that knows a great deal about a person can use this information to extract the most from their willingness to pay, for instance based on specific life situations (see also "Making the most of consumers' willingness to pay in order to maximise profits"). It can also deliberately exclude (groups of) customers by setting prices that exceed their willingness and ability to pay. In addition to such deliberate discrimination against certain consumers (or consumer groups), BDAI could also lead to unintentional discrimination if an algorithm reaches discriminatory decisions even though no one has explicitly programmed it to do so.

Both types of discrimination, and the question of how they can be avoided, are addressed below:

Scenario: Making the most of consumers' willingness to pay in order to maximise profits

Let's imagine what online stores could look like in the future. As soon as the customer enters the store, they are offered a range of products and services almost always tailored to their preferences, life situation and current needs. The customer is impressed; they only need to click on "buy". The price seems right, even though it is close to the maximum the customer is just about prepared to pay. However, as the product or service is specifically tailored to the customer, direct price comparisons are more difficult.

This hypothetical scenario highlights another advantage that companies could have thanks to BDAI applications – in addition to growing product ranges and market shares. BDAI offers unprecedented opportunities to extract the consumer surplus.12 This would be a blessing for companies but a curse for users and consumers.

In particular, linking data on customer needs and preferences to financial and behavioural data using BDAI can provide deep insights into previously unknown consumer characteristics, such as their (situational) willingness and ability to pay. This private information can also be used against consumer interests. It is in the economic interest of consumers that at least their willingness to pay and (to a certain extent) their ability to pay are not disclosed to providers.

Otherwise, consumers or consumer groups may end up buying products that are overpriced when BDAI is used. BDAI can help companies gain detailed information on the maximum price that larger consumer groups are willing to pay – either because they themselves have this data or because they can buy it. There is a risk that companies will use this information specifically to increase earnings as extreme price differentiations (segment-of-one) can enable them to make a much higher profit (by setting higher prices) without having to fear decreasing sales volumes.
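The profit effect of such extreme price differentiation can be illustrated with a small calculation. The willingness-to-pay figures below are purely hypothetical; the point is only the comparison between one uniform price and segment-of-one pricing, which captures (almost) the entire consumer surplus:

```python
# Hypothetical maximum willingness to pay of five customers for the same product.
willingness_to_pay = [50, 70, 90, 110, 130]

def profit_at_uniform_price(price, wtp):
    """Only customers whose willingness to pay is at least the price buy."""
    return price * sum(1 for w in wtp if w >= price)

# Best single price: try each candidate price and keep the most profitable one.
best_uniform = max(profit_at_uniform_price(p, willingness_to_pay)
                   for p in willingness_to_pay)

# Segment-of-one: each customer is charged exactly their maximum willingness
# to pay, so no consumer surplus remains and no sales volume is lost.
segment_of_one = sum(willingness_to_pay)

print(best_uniform)    # 280 (price of 70: four buyers paying 70 each)
print(segment_of_one)  # 450
```

In this toy example, individualised prices raise profit from 280 to 450 without losing a single customer; the difference is precisely the surplus extracted from consumers.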

This does not concern the payment of higher prices for better or more suitable services, such as higher insurance premiums for covering higher risks. That would be a different form of differentiation, one encouraged by supervisors. Rather, it relates to individual and situational pricing for products that are (almost) identical. BDAI applications could make it easier to develop (mass-)individualised products and services at low cost. Providers could add individualised components to standard products – at no additional cost – making it more difficult for consumers to compare offers or switch to other providers.

What is clear is that price differentiation is neither prohibited per se nor illegitimate in principle. It is a key element of healthy competition, also in the financial sector. Although this is similar to the phenomenon described above, where risk adequacy is perfected on the basis of BDAI, differential price strategies in competition based on the customer segment, geographical location or life situations are, however, questionable if they result in extreme asymmetries in the context of BDAI. Put simply, how much of an uneven playing field can there be between omniscient providers and customers whose information is available and literally predictable before society starts to fight back and legislators and, in the case of financial service providers, regulators need to intervene? These complex issues will need to be discussed and the pros and cons will need to be evaluated. Financial regulation has been undergoing such cycles for decades. The industry would be well advised to anticipate them.

Deliberate discrimination

Linking information from a variety of sources using BDAI can help to reveal consumers' willingness and ability to pay for specific products and services with a relatively high level of precision. Another specific feature of BDAI applications is that characteristics that were never directly collected can also be revealed. In simplified terms, if nine customer characteristics are available, the tenth characteristic no longer needs to be collected because it can be derived from the other nine with great precision. If this gives rise to discrimination, consumers would not be able to trace it back to the personal data they have provided, nor could they do anything about it. Companies have to ensure that there is no such discrimination and allow outsiders, such as supervisory authorities, to verify this. Algorithm-based decisions need to be explainable. This is the only way to establish a corporate structure and culture that tackles discrimination effectively.
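The point that a characteristic which was never collected can nevertheless be derived can be sketched with synthetic data. All figures below are invented for illustration: an undisclosed attribute (income) is estimated from a correlated attribute that was disclosed (years of education), using ordinary least squares with the standard library only:

```python
# Synthetic customer data: (years_of_education, postcode_wealth_index, income).
customers = [
    (10, 3, 31000),
    (12, 4, 38000),
    (14, 6, 46000),
    (16, 7, 52000),
    (18, 9, 61000),
]

# Ordinary least squares with a single predictor (years of education).
xs = [c[0] for c in customers]
ys = [c[2] for c in customers]
n = len(customers)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# A new customer who withheld their income can still be "profiled":
# with 15 years of education, the model estimates their income anyway.
estimated_income = slope * 15 + intercept  # 49300 on this toy data
```

A real BDAI application would combine many more characteristics and a more powerful model, but the principle is the same: withholding one attribute does not prevent it from being inferred from the others.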

Unintentional discrimination

Even if companies or software developers have no bad intentions, algorithms can still display discriminatory behaviour or reach discriminatory decisions. Algorithms learn from data, and if this data suggests a discriminatory view to the algorithm, or suggests that discriminatory decisions are the optimal solution, i.e. maximise profits, (groups of) individuals could be discriminated against unintentionally. To solve this problem, technical approaches can be used, such as non-discriminatory data analysis and evaluation processes. Here, the challenging hurdle is translating the ethical and legal concept of discrimination into a mathematical definition, so that discrimination can be verified algorithmically and prevented. There are currently many approaches and research projects on this, but a generally accepted standard has so far not been established. Ultimately, companies need to ensure that algorithms are designed in a way that takes legal requirements into account. They have to prevent erroneous or prohibited conclusions from being drawn from their models, using appropriate monitoring and transparency mechanisms.
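One widely discussed way of translating the concept into a verifiable mathematical definition, used here purely as an illustration and not as the standard the text notes is still missing, is demographic parity with a tolerance threshold (the "four-fifths rule" known from US employment law): no group's approval rate may fall below a fixed fraction of the best-treated group's rate.

```python
from collections import defaultdict

def demographic_parity_check(decisions, threshold=0.8):
    """decisions: list of (group, approved) pairs.

    Returns True if every group's approval rate is at least `threshold`
    times the highest group's approval rate (the "four-fifths rule")."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Usage: group B is approved far less often than group A, so the check fails.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 4 + [("B", False)] * 6)
print(demographic_parity_check(decisions))  # False (0.4 < 0.8 * 0.8)
```

A check like this can flag a disparity but not explain or justify it; whether a given rate difference constitutes prohibited discrimination remains a legal question, which is exactly the translation problem described above.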

Summary

BDAI has the potential to fundamentally change financial markets. New processes can result in key competitive advantages. Companies will therefore hardly be able to avoid developing a strategy on how to deal with BDAI. Many will certainly invest to ensure that they and their systems are BDAI-ready. But there may also be companies that find their niche providing products and services labelled as "guaranteed BDAI-free".

Neither supervisors nor regulators have yet found conclusive answers to all the questions raised by BDAI. For this very reason, BaFin published its BDAI report in June 2018. It is now seeking an open dialogue with members of industry, the global regulatory community, academia and the press (see "Consultation").

Consultation

The report "Big data meets artificial intelligence – Challenges and implications for the supervision and regulation of financial services" is aimed at laying the foundations for in-depth discussions on big data and artificial intelligence. To this end, BaFin has invited a wide range of players, including companies and associations, other national and international supervisory authorities, representatives of academia and journalists, and consumers, to take part in the consultation on its report. Further information can be found at www.bafin.de.

The submitted responses will not be published individually but BaFin is planning to publish an anonymised and aggregated evaluation online.

The next issue of BaFin Perspectives will also cover the evaluation.

Footnotes:

  1. Data-driven business models that use BDAI to increase value added have allowed a number of tech companies to rise to among the highest-valued companies worldwide.
  2. BaFin, Big data meets artificial intelligence – Challenges and implications for the supervision and regulation of financial services, retrieved on 10 July 2018. The study was prepared in collaboration with PD – Berater der öffentlichen Hand GmbH, Boston Consulting Group GmbH and the Fraunhofer Institute for Intelligent Analysis and Information Systems. Parts of the article "Supervision and Regulation in the Age of Big Data and Artificial Intelligence" are based on this report.
  3. See also Section 3.3.
  4. See also Section 4.
  5. With this model, users are offered services that are supposedly free and unrivalled. In fact, users pay for these services by granting providers the right to use their data. What is particularly problematic is that many users are not sufficiently aware of the value of their data and thus of the price they are paying.
  6. Extract from the German Monopolies Commission's Biennial Report XX (2012/2013), Chapter I, "Aktuelle Probleme der Wettbewerbspolitik" (only available in German), page 63, retrieved on 15 June 2018.
  7. See Section 3.3.
  8. See Section 3.3.
  9. FZI Research Center for Information Technology, "Smart Data – Smart Privacy? Impulse für eine interdisziplinär rechtlich-technische Evaluation", pages 13-14 (only available in German), retrieved on 11 June 2018. This research paper was supported by the Federal Ministry for Economic Affairs and Energy (Bundesministerium für Wirtschaft und Energie) by resolution of the German Bundestag.
  10. German Advisory Council for Consumer Affairs, "Digitale Souveränität – Gutachten des Sachverständigenrats für Verbraucherfragen" (only available in German), retrieved on 11 June 2018.
  11. German Advisory Council for Consumer Affairs, loc. cit. (footnote 10).
  12. Consumer surplus is the difference between the maximum price that a consumer is willing to pay for a product or service and the price that they actually have to pay on the market.
