Detecting and mitigating the ethical risks of AI in financial services

The role of artificial intelligence (AI) in financial services continues to exercise C-suite minds, not least how to balance the benefits AI brings to customer experience and back-office operations against the ethical challenges it presents.

Sparking the debate is how financial organisations can employ AI tools such as machine learning (ML), large language models (LLMs), and generative AI while avoiding breaches of data privacy and consent, and unfair lending decisions.

It’s a topic highlighted in the latest Forvis Mazars C-suite Barometer, which shows that three-quarters of C-suite executives in financial services express ethical concerns over AI, with 85% believing more AI regulation is needed.

High-risk situations

Data privacy and consent are not new risks for financial services. However, AI’s arrival does introduce some additional red-flag situations. In particular, the sheer quantity of data that AI models require, together with the need for its quality and integrity, can severely test cyber security programmes. Moreover, the sensitive personal data held by financial services organisations presents a confidentiality issue that requires careful ethical consideration and management in an AI setting.

Consumer fears are a further consideration. While consumers are positive about what AI can achieve for society, they are less positive about its impact on access to credit and insurance products. In this respect, the quality of AI training data and algorithms is critical to ensuring that the decision-making process is free from bias that could lead to unfair decisions on access to financial products.

Assessing for bias and legality

Data dealing with identity and diversity needs to be scrutinised and handled with care so that it does not lead the AI model down the discrimination pathway. The key is to detect and deal with any bias in an AI model early. Sometimes, the less obvious data sources are the weakest links. For example, problems can arise during sign-up for a financial product when a third-party provider treats some personal information as optional. If the AI model has been taught that all personal information is required, it will exclude those consumers, then replicate and amplify that decision, creating an ongoing bias against consumers who chose not to provide optional details such as a telephone number or email address.
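As a minimal illustration of this kind of check, the Python sketch below compares approval rates between applicants who did and did not complete an optional field. The file and column names (decisions.csv, phone, approved) are hypothetical, and the 0.8 cut-off, loosely borrowed from the “four-fifths rule” used in disparate-impact analysis, is an illustrative assumption rather than a legal threshold.

```python
import pandas as pd

# Hypothetical decision log: 'phone' is an optional field (may be
# missing) and 'approved' is the model's decision (0 or 1).
df = pd.read_csv("decisions.csv")

# Split applicants by whether they supplied the optional field.
df["gave_phone"] = df["phone"].notna()
rates = df.groupby("gave_phone")["approved"].mean()
print(rates)

# Disparate-impact style ratio: approval rate of non-providers
# relative to providers. The 0.8 cut-off echoes the "four-fifths
# rule" and is an illustrative assumption, not a legal standard.
ratio = rates[False] / rates[True]
if ratio < 0.8:
    print(f"Possible bias against non-providers (ratio = {ratio:.2f})")
```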

A further ethical concern is ensuring that the information you ask for is legally permissible to collect and use. For example, strict laws apply within the European Union, where a consumer’s religious beliefs or sexual orientation cannot form part of a financial decision. Frequent and continuous scenario testing is advised, along with keeping an open mind on where problems can arise and proactively monitoring regulatory developments.
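One piece of that testing can be automated as a guard in the model pipeline. The sketch below fails fast if legally protected attributes appear in the feature set; the attribute list and names (assert_no_prohibited_features, feature_set.csv) are hypothetical and would need to reflect the jurisdictions in which you operate.

```python
import pandas as pd

# Illustrative list only; the attributes actually prohibited depend
# on the jurisdiction and the product in question.
PROHIBITED_ATTRIBUTES = {"religion", "sexual_orientation", "ethnicity"}

def assert_no_prohibited_features(features: pd.DataFrame) -> None:
    """Fail the pipeline if any prohibited column reaches the model."""
    found = PROHIBITED_ATTRIBUTES & set(features.columns)
    if found:
        raise ValueError(
            f"Prohibited attributes present in feature set: {sorted(found)}"
        )

# Example: run the check before every training or scoring job.
features = pd.read_csv("feature_set.csv")
assert_no_prohibited_features(features)
```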

Integrating ethical principles

Basic ethical principles can be applied to AI models at the development and training stage, such as not taking gender or disability into consideration. However, it is also important to ensure that the business model does not skew those ethical principles. For example, a bank with products aimed at small businesses may want to assess information differently from a retail bank aimed at consumers; from an ethical perspective, however, there should be no distinction when training the AI model to disregard, say, gender in applications for financial products. Again, regularly testing data will help to address ethical issues proactively before they escalate into unfair outcomes.
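As a hedged sketch of such a test: the snippet below trains a model with the protected columns excluded, then audits whether its decisions still differ across those groups, since proxies in the remaining features can reintroduce bias. The column names, the data file and the choice of a decision tree are assumptions for illustration only.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("training_data.csv")

# Protected columns are kept for auditing but never shown to the model.
PROTECTED = ["gender", "disability"]
X = df.drop(columns=PROTECTED + ["approved"])
y = df["approved"]

model = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, y)

# Audit: even without seeing the protected columns, the model can
# pick up proxies for them, so compare approval rates by group.
df["predicted"] = model.predict(X)
for col in PROTECTED:
    print(df.groupby(col)["predicted"].mean())
```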

Knowing the source of data used to train AI models, and how that data affects decisions, is equally essential. This may involve applying different AI layers: one focused on the data source, with a separate decision-tree overlay. Separating AI elements in this way gives organisations greater control over source data, so that the quality, reliability and ethical soundness of decisions can be checked relatively quickly and more accurately.
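A minimal sketch of that separation, assuming a hypothetical trusted-source registry and record format: the first layer validates provenance and completeness, and only records that pass are handed to the decision layer, so each layer can be audited independently.

```python
# Layer 1: data-source validation, auditable independently of scoring.
TRUSTED_SOURCES = {"core_banking", "credit_bureau"}  # illustrative registry

def validate(record: dict) -> dict:
    if record.get("source") not in TRUSTED_SOURCES:
        raise ValueError(f"Untrusted data source: {record.get('source')!r}")
    if record.get("income") is None:
        raise ValueError("Required field 'income' is missing")
    return record

# Layer 2: the decision overlay, applied only to validated records.
def decide(record: dict) -> bool:
    return record["income"] >= 20_000  # placeholder rule, not real policy

application = {"source": "credit_bureau", "income": 32_000}
print(decide(validate(application)))
```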

Transparency, accountability and trust

Regulatory transparency requires financial organisations to ensure consumers know when, for example, they are talking to a chatbot or when the information provided to them is AI-generated. Similarly, data transparency is crucial to understanding how AI decisions are made. Accountability is equally important, particularly when the best outcome is to override an AI-influenced decision. If the ability to account for decisions is missing because of poor transparency or inadequate documentation, sticking with the AI decision becomes more appealing, even when it is the wrong one.

To avoid such ethical dilemmas, an accountability framework should be developed, based on transparency and strengthened by governance. This approach builds trust in how a financial organisation identifies and addresses ethical concerns when AI is part of the financial product decision-making process.

Achieving balance through regulation

With 85% of respondents to the latest Forvis Mazars C-suite Barometer highlighting AI regulation as important, how the regulatory landscape responds is essential. The EU AI Act, which has recently come into force, goes some way towards answering ethical concerns. By placing safeguards on how AI can be used, such as limiting the use of biometric identification systems and banning social scoring, the Act helps to ensure that the force of law can balance ethical concerns with AI use. In particular, it takes a tiered approach to AI risk, with obligations proportionate to the assessed level of risk of each AI system.

However, with AI evolving rapidly, risk levels are becoming increasingly difficult to define, particularly where generative AI and LLMs are in play. As models evolve, regulation that addresses AI reasoning will become increasingly necessary, and regulatory guidance is now a priority.

Putting the correct structures in place

Financial services organisations operate in a highly regulated industry and are used to heavy scrutiny from regulators. A solid compliance framework certainly offers advantages when deploying AI solutions compared with other sectors. However, it can also be a trap: overconfidence in compliance capabilities can obscure how AI’s ethical concerns affect different parts of the business.

A more collaborative approach is to establish an ethics committee that sits across the compliance and technology divisions, ensuring that AI models are built with ethical, compliance and technological know-how baked in from the outset. Extending AI knowledge and expertise within each division will ensure that each element of the AI puzzle is clearly understood and effectively implemented.

Finally, good governance is vital. It provides a framework for solid leadership and accountability, which are at the heart of the ethical use of AI in financial services. Proactively supporting AI developments with the right framework and approach will also help to build trust by ensuring that all consumers have fair access to financial products.

Get in touch

If you would like to speak with a member of our Financial Services team, please contact us.
