The EU AI Act Unveiled: What businesses need to know for compliance and governance

The AI Act represents the first comprehensive legal framework regulating Artificial Intelligence to safeguard EU citizens. Its impact will vary across organisations depending on the level of risk associated with their AI systems and where they sit within the AI value chain. Regardless, all organisations will need to adapt their approach to managing AI risk.

The Act recognises several parties, including providers, deployers, importers, and distributors. The two main parties are providers, who develop AI systems, and deployers, who use them; for example, a software vendor that builds a CV-screening tool is its provider, while a company using that tool to screen job applicants is a deployer. Obligations fall on all parties, with the most stringent placed on providers, but deployers will still need to carry out due diligence to ensure that providers have met their obligations.

There are four levels of risk associated with AI systems: unacceptable risk, high risk, limited risk and minimal risk. Unacceptable risk systems are banned under the Act and attract the highest fines, while high-risk systems carry the most extensive obligations for all parties. Read our article on the EU AI Act and the different risk levels of AI systems.

Timeline

The Act received resounding approval on 13 March 2024 and will be published in the Official Journal of the European Union shortly. It will enter into force twenty days after publication. Given the rush to compliance seen with other far-reaching legislation, such as the GDPR and DORA, organisations should start their AI compliance projects now to ensure that all obligations are met within the timeline.

Enforcement of the Act is staggered, with different requirements coming into force at different times, as set out below.

  • Unacceptable risk AI systems will be banned six months after entry into force.
  • Obligations for general-purpose AI systems apply after twelve months.
  • High-risk AI systems must meet their obligations within twenty-four months, reflecting the complexity of those requirements.
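
By way of illustration, if the Act were published in June 2024, it would enter into force in early July 2024; the ban on unacceptable risk systems would then apply from January 2025, general-purpose AI obligations from July 2025, and high-risk obligations from July 2026. The exact dates will depend on when publication takes place.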

To assist with implementation, the Act also sets deadlines for the AI Office to provide guidance, including on:

  • Reporting serious incidents
  • Determining whether an AI system is high-risk

Regulators and fines

One regulatory body has already been established at European level: the AI Office. It has several responsibilities under the Act, including providing guidance to all parties.

The main Irish regulator has yet to be formally identified, but we do know that market surveillance authorities will regulate specific sectors. For example, the Central Bank of Ireland will be the market surveillance authority for AI systems used in financial services, insurance and banking.

The regulators will have several powers, such as withdrawing an AI system from the market or imposing financial penalties. The maximum fines, each set as a fixed amount or a percentage of global annual turnover (whichever is higher), are set out below:

  • Up to €35m or 7% of turnover for breaches of the ban on unacceptable risk AI systems
  • Up to €15m or 3% of turnover for a range of other infringements, including:
    • Providers' compliance with Chapter II (which covers risk management, cybersecurity, human oversight and more)
    • Deployers' compliance with Article 29
  • Up to €15m or 3% of turnover for providers of general-purpose AI
  • Up to €7.5m or 1% of turnover for supplying incorrect, incomplete or misleading information to notified bodies
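
To take hypothetical figures: an undertaking with a global annual turnover of €2bn that breaches the ban on unacceptable risk AI systems faces a maximum fine of €140m, since 7% of its turnover exceeds €35m; an undertaking with a turnover of €100m would instead face the €35m cap, as 7% of its turnover comes to only €7m.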

Impact on Irish businesses

With the Act now finalised, there is clarity over the responsibilities of businesses in Ireland. Those that were waiting to learn where they stand under the Act can now begin their AI journey knowing what they must do to comply. This clarity is likely to drive an uptick in AI adoption, which will in turn make the existing skills shortage more pronounced. Areas where upskilling will be required include:

  • Risk management
  • Auditing
  • Cybersecurity
  • Data governance
  • Education and training
  • Ethics

Another area where we will see an impact is the governance of AI. The Act imposes several obligations related to risk and monitoring. Although these primarily target high-risk AI, systems can unintentionally fall into this category through how they are used, trained, or developed. It is vital, then, that organisations expand current compliance programmes to encompass AI and embed an AI system governance framework. Such a framework will scrutinise the model, monitor outputs, evaluate bias, assess algorithms, and more, and is essential for effective risk management whether an AI system is developed internally or procured externally.

Given that most Irish businesses are likely to procure AI systems from third parties, they will be categorised as "deployers" under the Act. Although the requirements mainly pertain to high-risk AI systems, deployers must ensure that providers have the appropriate systems and processes to comply with the Act. They also need to ensure ongoing compliance with existing legislation, such as the GDPR.

Conclusion

Irish businesses are in a prime position to leverage the AI Act to integrate AI into their organisational models, ensuring they reap the benefits of AI and achieve a return on investment.

To accomplish this, it's crucial for organisations to understand their position in the AI value chain and the types of systems they currently use or plan to adopt. The initial step is to conduct an AI Act gap assessment, which involves identifying and classifying AI systems in use. From there, businesses can develop a roadmap to ensure compliance with the Act, thereby promoting responsible AI use.
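
For illustration, a bank carrying out this exercise might classify a credit-scoring model as high-risk (a use case listed in Annex III), a customer-facing chatbot as limited risk (subject to transparency obligations), and an internal spam filter as minimal risk, prioritising its compliance effort accordingly.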
