In this article, Gary Stakem, director of actuarial and risk services, outlines the implications for insurers and how actuarial teams can adapt to meet new needs.
Evolving technology and skills
With AI set to cause disruption right across the insurance value chain, compliance with the AI Act will become a key discussion point in boardrooms for years to come. The journey from understanding the Act to achieving actual compliance will present significant challenges, demanding a broad range of skills. These include experience with complex models, handling large datasets, a deep understanding of the commercial and regulatory environments, and effective stakeholder communication. While all of these competencies are already central to the actuarial profession, some additional upskilling will be necessary.
For decades, actuaries have utilised large volumes of data, written code and built statistical models for assessing risk, pricing, and product design. This role now looks set to evolve following the recent surge in data availability and computer processing capacity. Actuaries are now transitioning towards providing oversight for machine learning analysis, ensuring algorithms are explainable, ethical, and compliant with regulatory standards.
Implications of the AI Act for insurers
The Act, which defines AI in the broadest terms, categorises AI systems into four tiers: unacceptable risk, high risk, limited risk, and minimal risk.
Unacceptable or prohibited risk systems include those intended to manipulate or deceive and those which exploit vulnerabilities.
High-risk systems are permitted but will be highly regulated under the Act. The list of high-risk systems explicitly includes risk assessment and pricing of life and health insurance, along with evaluation of an individual’s creditworthiness. Although not explicitly called out, general insurance systems that profile individuals could also be considered high-risk.
Role of actuaries in ensuring compliance
The Act will require insurers to employ staff with a sufficient level of AI literacy. Actuaries, with their robust experience in complex models, are uniquely positioned to bridge this gap. The legislation offers actuaries an opportunity to upskill, broaden their influence, and play a critical role in ensuring the responsible use of AI.
Actuaries are ideally placed to act as compliance officers for high-risk systems under the AI Act.
AI risk management systems
Under the Act, insurers must establish, implement, and maintain risk management systems for high-risk AI models. Actuarial teams can help meet the requirement to test models against defined metrics and probabilistic thresholds, ensuring models perform consistently for their intended purpose. Actuarial judgment will be crucial in mitigating residual risk in pricing and underwriting models to acceptable levels.
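As a rough illustration, the sketch below checks a fitted binary claims model against two illustrative metrics: AUC for discrimination and the Brier score for calibration. The metric choices and threshold values are assumptions for the example, not figures drawn from the Act.

```python
# A minimal sketch of metric-based model testing, assuming a fitted
# classifier with a predict_proba method and a labelled holdout set.
from sklearn.metrics import roc_auc_score, brier_score_loss

def test_model(model, X_test, y_test, min_auc=0.75, max_brier=0.10):
    """Check discrimination and calibration against agreed thresholds."""
    probs = model.predict_proba(X_test)[:, 1]      # predicted claim probabilities
    results = {
        "auc": roc_auc_score(y_test, probs),       # discrimination
        "brier": brier_score_loss(y_test, probs),  # calibration (lower is better)
    }
    passed = results["auc"] >= min_auc and results["brier"] <= max_brier
    return passed, results
```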
Moreover, the Act requires that training, validation and testing datasets adhere to appropriate data governance practices, a discipline already well established in actuarial modelling.
Monitoring and feedback loops are key features of any robust risk management system, and high-risk AI systems are no exception. The Act specifically references models which continue to learn after deployment. Actuaries must address how to prevent their models from learning biases post-production.
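One way teams might operationalise such monitoring is with a drift metric like the population stability index (PSI), long familiar from pricing and credit-scoring work. The sketch below compares the live score distribution against a training-time baseline; the 0.2 alert threshold is a conventional rule of thumb, not a figure from the Act.

```python
# A minimal PSI sketch: large values indicate the deployed model is
# scoring a noticeably different population than it was trained on.
import numpy as np

def psi(baseline_scores, live_scores, n_buckets=10):
    # bucket edges come from the baseline (training-time) score distribution
    edges = np.quantile(baseline_scores, np.linspace(0, 1, n_buckets + 1))
    base = np.histogram(np.clip(baseline_scores, edges[0], edges[-1]), bins=edges)[0]
    live = np.histogram(np.clip(live_scores, edges[0], edges[-1]), bins=edges)[0]
    base_pct = np.clip(base / len(baseline_scores), 1e-6, None)  # avoid log(0)
    live_pct = np.clip(live / len(live_scores), 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# e.g. if psi(training_scores, this_months_scores) > 0.2: trigger a model review
```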
An actuary’s role in explainable AI (XAI)
Many actuaries working in the data science domain have already started developing underwriting risk assessment models by training neural networks (a specific type of machine learning model) on insurers’ own datasets. The recent rise of a specific type of neural network, the Large Language Model (LLM), has introduced a whole new world of capabilities. LLMs (which include ChatGPT) are designed to interact with human language and are trained on vast amounts of text data, including information available on the internet. Insurers are now on the cusp of combining their own internal data with the power of LLMs to develop risk prediction models of far greater sophistication than ever before.
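For readers less familiar with the mechanics, the sketch below shows the basic shape of training a small neural network on tabular policy data using scikit-learn. The file name and rating factors (age, bmi, smoker) are hypothetical, and a real underwriting model would involve far more features, validation, and governance.

```python
# A hypothetical neural-network risk model trained on internal policy data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("policy_data.csv")               # hypothetical internal dataset
X, y = df[["age", "bmi", "smoker"]], df["claim"]  # illustrative rating factors

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),                             # neural nets need scaled inputs
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```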
While this technology is incredibly powerful, the models used are becoming increasingly complex and are subject to all the disinformation and biases present in the data on which they are trained. When it comes to risk rating, claims assessment, or any other insurance activity that directly impacts consumer outcomes, it is critical that the models used are fair and unbiased. To ensure this is the case, models must be explainable. Given the nuances of AI applications in insurance, actuaries will play a pivotal role in explainable AI, or XAI.
XAI implications for actuarial models
Explainability is a cornerstone of the AI Act. Actuaries will face challenges in establishing XAI frameworks as traditional linear models give way to more complex non-linear ones. Overcoming these challenges is crucial to detect biases and ensure fairness in models.
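As one concrete example of a technique that carries over to non-linear models, the sketch below applies permutation importance to the hypothetical model and holdout set from the earlier sketch, ranking rating factors by how much shuffling each one degrades performance. In practice, teams often layer on richer tools such as SHAP values or partial dependence plots.

```python
# Model-agnostic feature importance for the hypothetical model above:
# shuffle one feature at a time and measure the drop in holdout score.
from sklearn.inspection import permutation_importance

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X_test.columns, result.importances_mean), key=lambda p: -p[1])
for name, importance in ranked:
    print(f"{name}: {importance:.4f}")
```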
In some limited circumstances, the AI Act will actually permit the processing of special category data (race, ethnic origin, religion, genetic data, political beliefs, etc.) in a test environment so biases can be detected and mitigated before models go live. How this gets executed in practice will be subject to much ethical debate.
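By way of illustration, one simple bias test in such an environment is a disparate impact ratio comparing model outcomes across a protected attribute, as sketched below. The column names and the four-fifths threshold are illustrative assumptions, not stipulations of the Act.

```python
# A hypothetical test-environment bias check: ratio of the lowest to the
# highest group rate of a favourable outcome (closer to 1 is more even).
import pandas as pd

def disparate_impact(test_df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    rates = test_df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# e.g. if disparate_impact(holdout, "ethnicity", "offered_standard_rate") < 0.8:
#     investigate and mitigate before the model is deployed
```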
Model explainability, though not new to actuaries, demands sharp governance practices. Key aspects of XAI include:
- Auditability: The Act requires technical documentation to be kept on all high-risk systems. Models must allow for automatic record-keeping and logging capabilities, with traceable records of decision-making processes (see the sketch after this list).
- Intelligibility: Human oversight is a critical component of the Act. Models must be designed with “appropriate human-machine interface tools” that ensure a model’s inner workings can be understood.
- Transparency: The Act requires that high-risk systems allow deployers to interpret their outputs – another discipline that actuaries should be well-equipped to address.
- Reliability: The Act calls for systems to be accurate and robust – a challenge as actuarial models become more complex. Anyone who has even casually experimented with LLMs has probably already encountered obscure and hard-to-explain AI model hallucinations.
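To make the auditability point concrete, the sketch below logs each model decision with its inputs, output, and model version to an append-only file. The field names are illustrative, and a production system would need tamper-evident, centrally governed storage rather than a local file.

```python
# A minimal decision-logging sketch for traceable record-keeping.
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version, inputs, output, path="decision_log.jsonl"):
    record = {
        "id": str(uuid.uuid4()),                              # unique decision id
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when it was made
        "model_version": model_version,                       # which model made it
        "inputs": inputs,                                     # what it saw
        "output": output,                                     # what it decided
    }
    with open(path, "a") as f:                                # append-only log
        f.write(json.dumps(record) + "\n")

log_decision("pricing-v2.3", {"age": 44, "smoker": False}, {"premium": 512.40})
```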
Additional skills for actuaries to consider
Compliance with the AI Act will be a top priority for insurers in the coming years and so actuaries with appropriate competencies will be much sought after.
Actuarial professionals looking to upskill should consider the following:
- Getting up to speed with related regulations such as the GDPR, Solvency II Directive, Insurance Distribution Directive, DORA, the Digital Services Act, the Consumer Protection Code, and differential pricing regulations in insurance.
- Brushing up on popular programming languages such as Python, R, and SQL.
- Obtaining at least a foundation-level understanding of the cybersecurity and data protection principles that are fundamental to AI Act compliance.
- Developing broad stakeholder relationships. Ensuring compliance with the AI Act will require collaboration with sales, underwriting, risk, compliance, legal and IT departments. Communication with boards and regulators is also an essential skill.
What are the next steps for insurers?
- Develop, if not already in place, a digitalisation strategy document that is consistent with the wider business strategy.
- Map out all AI systems, both already in existence and planned under the digitalisation strategy.
- Establish the right cross-functional teams and identify any residual skill gaps.
- Develop training and awareness programmes.
- Create the governance forums, frameworks and roadmaps that will be required to implement AI systems in a compliant manner.
Conclusion
As the EU AI Act ushers in a new regulatory framework for insurers, actuaries are poised to play a central role. By leveraging their traditional skills and embracing new responsibilities, actuaries can become instrumental compliance officers overseeing the responsible use of AI.