On 9 January 2024, the Department of Public Expenditure, NDP Delivery and Reform published its interim guidelines on the use of Artificial Intelligence (AI) in the public service. As AI continues to proliferate globally, a growing number of use cases is emerging across industries, including in the public sector.
Given the pivotal role that public services play in the lives of everyone in our communities, AI must be used responsibly, without compromising human rights such as privacy and access to services. The guidelines set the tone for the use of AI in the public service and provide the foundations on which a sound AI governance framework can be built.
Over the last few months, we have been talking to clients about responsible AI, and following the publication of the interim guidelines we have put together answers to some of the most common questions we are asked:
1: These guidelines are interim; if we apply them, will we have to make changes in the future?
While the guidelines are interim, they provide a solid foundation for establishing an AI governance framework. Waiting for the AI Act is not advisable, particularly given that numerous public sector bodies are already using AI in various forms.
The guidelines can be broken down into three main parts:
I. Principles:
The principles are taken directly from the European Commission's High-Level Expert Group on AI and may change over time, but any such changes are unlikely to be significant. The principles were also included in the draft of the AI Act that the European Parliament voted on in June 2023, although they have been removed from the final version. This acceptance of the principles as a basis for enabling responsible AI means they are likely to remain relevant into the future.
II. Risk management:
The AI Act imposes specific obligations on relevant actors based on the assessed risk level of the AI system in which they are involved. There are four levels of risk:
- Unacceptable risk
- High-risk
- Limited risk
- Minimal risk
Each risk level carries distinct obligations, except for unacceptable risk: AI systems in that category are banned outright and must not be used.
While the Act mandates a risk management system only for providers of high-risk AI systems, it is imperative that all organisations be able to manage the risks associated with their use of AI. Failing to implement a robust risk management system may elevate the risk category of an AI use case, potentially to high-risk or even unacceptable levels, exposing the organisation to increased obligations and potential penalties.
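To make the tiered approach concrete, the Python sketch below shows how an organisation might record the mapping from risk tier to obligations when triaging its systems. This is a minimal illustration, not an official checklist: the tier names mirror the Act, but the obligation lists are simplified placeholders and the triage helper is hypothetical.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified, illustrative obligations per tier -- not a legal checklist.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["Prohibited: do not deploy"],
    RiskLevel.HIGH: ["Risk management system", "Human oversight", "Conformity assessment"],
    RiskLevel.LIMITED: ["Transparency obligations, e.g. disclosing that AI is in use"],
    RiskLevel.MINIMAL: ["No mandatory obligations; voluntary codes of conduct apply"],
}

def triage(system_name: str, level: RiskLevel) -> None:
    """Print the simplified obligations attached to a system's assessed tier."""
    print(f"{system_name} ({level.value} risk):")
    for obligation in OBLIGATIONS[level]:
        print(f"  - {obligation}")

triage("CV-screening tool", RiskLevel.HIGH)
```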
III. Safeguards and considerations:
The safeguards and considerations align closely with the principles outlined in the AI Act and have garnered significant attention and discussion globally. The imperative for human involvement echoes a similar requirement found in the GDPR, which mandates human oversight in instances of automated decision-making. Some of the considerations may undergo refinements and updates over time, especially as the AI Advisory Council establishes itself.
2: Will we have to make changes to existing processes in order to meet the guidelines?
AI is widely acknowledged as a transformative technology with the potential to significantly affect many aspects of our lives. With this in mind, existing frameworks, governance processes, risk management procedures, change control and the like should be updated to reflect the possible impact of AI, and the principles of responsible AI should be embedded across all of them.
For instance, considering risk assessments, organisations typically evaluate technology, operational and data protection risks. A common risk management tool, the Data Protection Impact Assessment (DPIA), focuses primarily on assessing the impact of processing personal data on individuals' rights and freedoms. To adequately address the nuances introduced by AI, this tool should be extended to encompass AI-specific assessments, integrating AI risks with existing data protection concerns.
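As a rough illustration of what such an extension might capture, the sketch below pairs classic DPIA fields with AI-specific ones in a single record. The field names and example values are hypothetical, not drawn from any official template; the automated decision-making flag ties back to the GDPR requirement for human oversight noted earlier.

```python
from dataclasses import dataclass, field

@dataclass
class ExtendedImpactAssessment:
    """Hypothetical record combining classic DPIA fields with AI-specific ones."""
    system_name: str
    # Classic DPIA concerns
    personal_data_categories: list[str] = field(default_factory=list)
    lawful_basis: str = ""
    # AI-specific extensions
    ai_act_risk_level: str = "unassessed"    # unacceptable / high / limited / minimal
    automated_decision_making: bool = False  # triggers the human-oversight requirement
    bias_assessment_completed: bool = False
    human_oversight_mechanism: str = ""

# Illustrative entry for a hypothetical system.
assessment = ExtendedImpactAssessment(
    system_name="Benefit-eligibility scoring",
    personal_data_categories=["income", "household composition"],
    lawful_basis="public task",
    ai_act_risk_level="high",
    automated_decision_making=True,
    human_oversight_mechanism="Caseworker reviews every adverse decision",
)
print(assessment.ai_act_risk_level)
```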
Audit and risk committees need to adapt to the changing environment and ensure that they are equipped to provide proper oversight of AI risks. This adaptation may involve directing the business to make changes in reporting mechanisms and incorporating the principles of responsible AI into discussions about risks.
While numerous other examples exist where process adjustments may be necessary, the most substantial change lies in how the public sector approaches AI and embraces change.
3: How do the guidelines align with the upcoming AI Act?
The primary objective of the AI Act is to ensure the responsible use of AI across Europe, managing its impact on EU citizens. In this context, the interim guidelines align closely with the AI Act. While the AI Act operates as product safety legislation, akin to the medical devices regulation, focused on managing risks associated with AI products, the guidelines are more operational and aim to serve as a foundational resource for public entities using or intending to adopt AI. That said, the content of the guidelines closely follows many of the requirements of the AI Act, as noted in the answers above.
Using the principles as an illustrative example: even though they were initially part of the AI Act and were subsequently removed, the earlier version of the Act indicated that adherence to the obligations in Chapter 3 (requirements for providers of high-risk AI systems) would effectively translate into implementing these principles.
Furthermore, the guidelines also incorporate the notions of risk assessment and high-risk AI. These are core elements of the AI Act, which lists several categories of high-risk AI systems, some of which relate directly to the public sector and access to public services, including education.
4: Are there any specific dos and don’ts?
Do:
- Apply the principles.
- Undertake risk assessments.
- Ensure there is always a human involved in the final decision.
- Evaluate accuracy and bias before implementing the system (see the sketch after this list).
- Ensure the system, whether built in-house or procured from a third party, has been built using data that complies with the GDPR.
- Ensure relevant employees and team members have the required skills, knowledge and tools to use the AI system properly.
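As a concrete, deliberately simplified illustration of the accuracy-and-bias check above, the sketch below computes accuracy per demographic group on hypothetical evaluation data and flags a disparity. The data, group labels and 10-point threshold are all assumptions for illustration, not values drawn from the guidelines.

```python
# Minimal sketch: per-group accuracy check on hypothetical evaluation data.
# Each record is (predicted_label, true_label, demographic_group); all values
# here are illustrative.
records = [
    (1, 1, "A"), (0, 0, "A"), (1, 1, "A"), (1, 0, "A"),
    (0, 1, "B"), (0, 0, "B"), (1, 1, "B"), (0, 1, "B"),
]

def accuracy_by_group(records):
    """Return {group: accuracy} so disparities between groups are visible."""
    totals, correct = {}, {}
    for predicted, actual, group in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {g: correct[g] / totals[g] for g in totals}

scores = accuracy_by_group(records)
print(scores)  # {'A': 0.75, 'B': 0.5}

# Hypothetical acceptance rule: flag the system if group accuracies diverge
# by more than 10 percentage points.
if max(scores.values()) - min(scores.values()) > 0.10:
    print("Accuracy gap between groups exceeds threshold; review before go-live.")
```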
Don’t:
There are fewer of these, beyond the inverse of the dos above. One of the most important don'ts is: do not wait to get started. The sooner conversations about responsible AI start, the quicker actions will follow and the more prepared you will be. At the same time, do not move too fast: ensure you have the correct controls and governance in place before introducing AI, otherwise the risk of falling foul of regulation, and in the worst case of causing harm to individuals, increases.
5: What next?
Many public sector organisations may already be using AI in various forms, some potentially without realising it. To align with the guidelines and the anticipated AI Act, expected to be published in the first quarter of 2024, public sector bodies should initiate the following steps:
- Undertake a technology assessment. Conduct a comprehensive assessment of existing technology systems to identify the extent of AI utilisation.
- Develop an AI inventory. Create an inventory of AI systems within the organisation, ensuring a thorough understanding of the AI landscape.
- Identify risk levels. Use the AI Act's risk levels to categorise each AI system in the inventory (a simple sketch of this step follows this list).
- Conduct an AI Act gap assessment. Pinpoint areas necessitating changes within both business and governance structures.
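To illustrate how the inventory and risk-level steps might fit together in practice, here is a minimal sketch. The system names and classifications are hypothetical examples; the high-risk flag on the education-related system simply reflects that education appears among the AI Act's high-risk areas.

```python
# Minimal sketch of an AI inventory with AI Act risk tiers attached.
# System names and classifications are hypothetical examples.
inventory = [
    {"system": "Chatbot for citizen queries", "risk_level": "limited"},
    {"system": "Exam-grading assistant", "risk_level": "high"},  # education is a high-risk area
    {"system": "Spam filter on shared mailbox", "risk_level": "minimal"},
]

# Surface high-risk systems first: they drive the gap assessment.
for entry in sorted(inventory, key=lambda e: e["risk_level"] != "high"):
    print(f'{entry["system"]}: {entry["risk_level"]} risk')
```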
Conclusion
The Government’s commitment to ensuring responsible AI should be welcomed by all, and the guidelines need to be taken seriously as a strong foundation on which public sector bodies can build their AI governance frameworks. Properly embedding the principles, combined with updating existing processes, will ensure the responsible rollout of ethical AI.