The Artificial Intelligence Act

On 12 July 2024, the Artificial Intelligence Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council, commonly shortened to the AI Act) was published in the Official Journal of the European Union. The Regulation entered into force on 1 August 2024 and will apply gradually: most of its provisions become applicable 24 months after entry into force, by which time all EU Member States must comply with them.

The AI Act is the first comprehensive legal framework for Artificial Intelligence (AI) that addresses the risks posed by AI. It provides a regulatory framework for both providers and users of AI systems, removing the uncertainty that arises from the lack of regulation.

In practice, we already rely on the myriad possibilities offered by AI, despite the undeniable risks this may entail. The new Regulation seeks to minimize these risks, and some of its prohibitive provisions are explicitly designed to maximize the protection of users' rights.

In line with several other EU regulations, the AI Act adopts a "risk-based approach": AI technologies are classified as posing unacceptable risk (prohibited practices), high risk, limited risk, or minimal to no risk, with the legislators setting differentiated obligations for each category.

In particular, activities, infrastructures, and services that affect large numbers of people, and where failures could have wide-ranging adverse effects, are considered high risk. The potential harm may be not only psychological but also physical (harm to life and bodily integrity).

This is considered to be the case, inter alia, in the following areas:

  • critical infrastructure (e.g. transport), where failures may endanger the life and health of citizens;
  • education or vocational training, which may determine access to education and career opportunities (e.g. marking an exam);
  • safety components of products (e.g. AI applications in robot-assisted surgery);
  • employment, employee management, and access to self-employment (e.g. CV-screening software for recruitment);
  • essential private and public services (e.g. credit scoring, which may deprive citizens of access to credit);
  • law enforcement, which may interfere with people's fundamental rights (e.g. assessing the reliability of evidence);
  • migration management, asylum, and border control (e.g. automated examination of visa applications);
  • justice and democratic processes (e.g. AI solutions for searching court judgments).

High-risk AI systems will be subject to strict obligations both when they are placed on the market and throughout their use: they must undergo thorough testing before distribution, and their use must be continuously monitored thereafter.

For AI systems classified as limited risk, the AI Act introduces specific transparency obligations. Informing users in advance is of utmost importance, so that they can decide whether they wish to interact with a machine and make an informed choice to continue or withdraw. Providers must also ensure that AI-generated content is identifiable; in particular, AI-generated text addressed to the public must be clearly labelled as artificially generated.

Applications such as AI-enabled video games or spam filters are classified as minimal risk under the Regulation and may be used freely, without additional obligations. A significant proportion of the AI systems currently in use in the EU fall into this category.

Within the Commission, the European AI Office was established in February 2024 to oversee, together with the Member States, the enforcement and implementation of the AI Act.

Document

Legal Newsletter 2024/03.
