
Regulation is driving AI-based innovations in banking! You didn't read that wrong. At first glance, regulation with its strict rules and requirements is perceived as a brake on innovation, yet in many cases it provides legal certainty and orientation. It creates a stable framework and builds trust, conditions under which innovation can flourish. Regulation can therefore also act as a catalyst for creative solutions that inspire the development of innovative AI applications and business models in the banking industry. In this blog post, I take a closer look at the regulatory aspects that strengthen this symbiosis.

AI is entering the financial world

Artificial intelligence and machine learning are finding their way into the world of finance and promise new opportunities for growth and revenue for banks and financial service providers. Automation and intelligent technologies can be used to optimise processes, evaluate large volumes of data, derive information and support decision-making. However, alongside a number of business and technological advantages, these capabilities can also give rise to risks or lead to wrong decisions if highly automated decision-making processes in the financial sector run without human oversight. This entails risks and raises critical questions that prompt banking supervisors and regulators to take action.

Fairness, transparency and risk management as a complex challenge

Fairness, transparency and appropriate risk management are considered the cornerstones of guidelines on AI in finance. BaFin, the German Federal Financial Supervisory Authority, for example, sees it as its duty to define guidelines that cover both the potential and the limitations and risks of the technology. The supervisory authority requires banks to implement robust risk management and to meet transparency requirements in order to assess and minimise the possible risks of AI applications. In addition, the risk of unjustified discrimination arising from the use of AI applications must be prevented. In its supervisory activities, BaFin therefore takes into account the possible risks and discrimination that can arise from the automation of the financial industry. But what kinds of discrimination are conceivable in the financial world as a result of the use of AI?

Direct or indirect discrimination and automated bias

Discrimination occurs when individuals or groups are treated less favourably on the basis of protected characteristics such as race, ethnic origin, national origin, gender, age, marital or family status, ideology, religion, sexual orientation, disability or other personal characteristics. Direct discrimination in the financial sector occurs, for example, when older people are disadvantaged in the provision of financial services because of their age. Not every instance of unequal treatment is prohibited by law: a materially justified distinction based on age or income level constitutes permissible unequal treatment. Care must be taken, however, with personal characteristics whose processing is inadmissible under the General Data Protection Regulation; these must also be taken into account when using AI in banking. Likewise, under the heading of ‘algorithmic fairness’, algorithms must be designed so that individuals and groups are treated equally in AI-based applications. This is also intended to minimise so-called automated biases by preventing the systematic distortion of results through the use of AI. The avoidance of any form of discrimination should be considered from both an ethical and a legal perspective in business processes such as credit checks, applicant screening and fraud detection.
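To make the idea of algorithmic fairness a little more tangible, the following minimal Python sketch checks whether approval rates in a credit-decision log differ noticeably between two age-based groups. The data, the group split and the 0.8 threshold (the commonly cited ‘four-fifths rule’) are purely illustrative assumptions, not a prescribed supervisory test; in practice, the choice of fairness metric and threshold would itself need legal and methodological review.

from collections import defaultdict

# Hypothetical credit decisions: (age group, approved?)
decisions = [
    ("under_60", True), ("under_60", True), ("under_60", False), ("under_60", True),
    ("60_plus", True), ("60_plus", False), ("60_plus", False), ("60_plus", True),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1, False as 0

# Approval rate per group and the ratio of the lowest to the highest rate
rates = {group: approvals[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

print("Approval rates per group:", rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # illustrative threshold, not a regulatory requirement
    print("Potential disparity detected: trigger human review and root-cause analysis.")

A check like this does not replace a legal assessment, but it shows how fairness criteria can be embedded as measurable, automated controls in model monitoring.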

BaFin regards compliance with these guidelines as an integral part of its governance requirements. Governance processes must therefore be adapted and supplemented with regard to the use of AI. This allows BaFin to address unjustified discrimination arising from the use of AI under supervisory law and to demand compliance. Banks must clearly and transparently assign responsibilities for the use of AI applications and strengthen the risk awareness of the employees entrusted with their development and use through training and awareness-raising.

The European view of AI – the EU AI Act

The European Regulation on Artificial Intelligence (AI Regulation) came into force on 1 August 2024. It lays down the legal framework for the use of AI in all EU member states. The aim of the legislation is to uphold fundamental rights, set uniform safety standards for the use and handling of AI systems, and create trust in the new technology.

Classification of AI systems by risk potential

A key requirement of the regulation is the categorisation of AI systems according to their estimated risk potential. The regulation takes a risk-based approach and distinguishes between four levels of risk:

  • ‘Unacceptable risk’: This class includes AI systems that are considered disproportionately risky to the fundamental rights and safety of citizens. Such systems, for example social-scoring systems, are banned.
  • ‘High risk’: These AI systems are permitted but subject to strict requirements. Companies must carry out a conformity assessment to demonstrate that these systems meet the legal requirements for transparency, fairness and security; the systems are approved only if they meet the prescribed requirements.
  • ‘Limited risk’: For these AI systems, there is an obligation to inform consumers (end users). Examples include AI chatbots for customer support.
  • ‘Minimal risk’: AI systems in this category are subject to the fewest requirements. Examples include video games or spam filters.

Financial institutions that integrate AI-based applications and models into their services are obliged to carry out a detailed risk assessment of these models. They must ensure that their AI systems meet the legal requirements for transparency, fairness and security. At the same time, the legal framework should support the creation and promotion of innovation. AI systems that are used to check the creditworthiness or assess the credit rating of natural persons, as well as to assess risk and set prices (e.g. in life and health insurance), are considered high-risk AI systems (HRAI).
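As an illustration of how such a risk classification might be operationalised internally, the following Python sketch maps a hypothetical inventory of AI use cases in a retail bank to the four risk tiers and prints the associated obligations. The use cases, their tier assignments and the summarised obligations are assumptions made for this example; the actual classification of a system under the AI Regulation always requires a case-by-case legal assessment.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment, risk management, documentation, human oversight"
    LIMITED = "transparency obligations towards end users"
    MINIMAL = "no specific obligations beyond general law"

# Hypothetical AI inventory of a retail bank with an assumed tier per use case
ai_inventory = {
    "social scoring of customers": RiskTier.UNACCEPTABLE,
    "creditworthiness assessment of natural persons": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "spam filter for the service mailbox": RiskTier.MINIMAL,
}

for use_case, tier in ai_inventory.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")

An inventory of this kind can serve as the starting point for the detailed risk assessment described above.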

What can banks do now?

In summary, when using AI applications, banks and financial institutions must take measures to prevent unjustified discrimination against consumers. To this end, review processes must be established to identify sources of discrimination and eliminate them through targeted measures. Appropriate implementation requires reliable and transparent data governance and data management. The aim is to guarantee fair, transparent and non-discriminatory treatment of consumers. In addition, the targeted development of AI expertise and the systematic training and further education of employees and decision-makers should increase banks' AI readiness.

Conclusion

Regulation is not evil and is by no means an obstacle to innovation. With clear requirements, guidelines and guardrails, it ensures stability and orientation in dealing with AI and supports the establishment of a culture of innovation through a legal framework and protective measures. In doing so, it creates trust and reliability. Regulation thus acts as a catalyst for the emergence of stable and legally secure AI applications in banking.

You can find more exciting topics from the adesso world in our previously published blog posts.



Author Nehir Safak-Turhan

Nehir is Senior Business Developer for Line of Business Banking at adesso – and an economist out of passion. Recognising banking and industry-specific correlations and transforming this information into intelligence is her daily bread. Throughout her twenty-year career in banking and IT, in keeping with Sesame Street’s principle ‘asking questions is a good way of finding things out’, she has never stopped asking questions to find the answer she’s looking for.


