Artificial intelligence (AI) in risk management

On 4 November 2024 our Risk Management and Digital and Artificial Intelligence (AI) Advisory experts held a live Q&A webinar. Here is a summary of what we discussed around AI.

This session focused on two main areas: the evolving risk universe and the challenges it presents to firms and risk functions, and a deep dive into the management of AI risks.

AI in risk management

AI was the second area of focus, and it was clear that this risk is at the forefront of senior management's minds. With the emergence of generative AI, the industry has already seen a high level of innovation and development, along with some interesting use cases. Some industries are focusing primarily on productivity and efficiency gains, while others are taking a more sceptical approach. However, with approximately 80% of businesses expected to use generative AI models, applications and APIs by 2026, it is important that firms understand and control the risks associated with these tools.

Still, the questions are: what does this mean for financial services firms, and how can we best tackle the risks that stem from AI utilisation?

Challenges firms face as a result of AI

  • The “Black Box problem” – A general lack of understanding of how an AI model reaches its outputs, either because the model is too complex for human comprehension or because it is closed and safeguarded by intellectual property.
  • Limited controls and governance around AI management – Firms must clearly define roles and responsibilities for AI risk management. Without this, there is potential for scope creep between the first and second lines, preventing the second line from providing effective challenge and oversight.
  • Data management & Data governance – Without understanding the quality and source of the data being ingested by the AI model, firms may struggle to understand the output. As a result, complying with other key principles and requirements (such as lawfully processing data, explainability, transparency and auditability) can become difficult.
  • Third-party vendors and due diligence – Firms need a clear understanding of how AI risk will affect their business through third-party relationships and should proactively review their vendor inventories to identify those that provide AI solutions or components. Additionally, contract reviews, liability clauses, insurance and reporting protocols across the chain will need to be considered.

What should firms do?

  • Strategy and planning – Firms must develop a clear strategy for AI (what are they aiming to achieve, and is it realistic?) as well as a clear strategy for managing the associated risks, to avoid reactive and siloed decisions driven by external pressures that may not serve the firm's longer-term interests.
  • Develop a robust AI governance framework – Firms need to be clear on how AI aligns with their business model and strategy and invest in the necessary infrastructure to implement AI effectively. The strategy to mitigate the negative impact of AI should be integrated into the broader risk management framework and supported by a clear risk appetite.
  • AI culture – Ways of working and thinking about risk may need to evolve given the nature of AI.
  • Prioritise AI readiness – Firms must ensure data quality, infrastructure scalability, and compliance integration.

Conclusion

As AI technology becomes more advanced and complex, firms must establish a clear strategy that aligns with their risk appetite and management principles to appropriately balance the opportunities and risks involved.

Get in touch with our Financial services team

If you would like to speak with a member of our Financial services team, please get in touch.