Artificial Intelligence Act: innovation, risks, and protection of fundamental rights

17 January 2024

In April 2021, the European Commission (the “Commission”) proposed the regulation laying down harmonised rules on artificial intelligence (the Artificial Intelligence Act, briefly the “AI Act”), the first European regulation on artificial intelligence (“AI”). In the following years, the Commission put forward other legal initiatives connected to AI systems, such as, in September 2022, the Proposal for a Directive on adapting non-contractual civil liability rules to AI (the “AI Liability Directive”) and a Proposal for a Directive on liability for defective products, which would revise Directive 85/374/EEC in light of developments related to new technologies, including AI (the “New Product Liability Directive”).

Regarding the AI Act, on December 6, 2022, the Council reached an agreement on the text and, in mid-June 2023, it started interinstitutional talks with the European Parliament. In December 2023, the Council presidency and the European Parliament’s negotiators reached a provisional agreement on the AI Act. Subsequently, since the AI Act had raised multiple questions, on December 12, 2023, the Commission answered them through a dedicated "Q&A" document (available here), which was then updated on December 14, 2023.

An overview of the AI Act in light of the Q&A's content

The AI Act aims to promote the responsible and ethical use of AI while ensuring that the health, safety, and fundamental rights of EU citizens are respected. It shall also protect democracy, the rule of law, and the environment. Furthermore, it aims to stimulate investment and innovation in AI in Europe. The Regulation will apply to both private and public actors inside and outside the EU, as long as the AI system is placed on the Union market or its use affects people located in the EU. The main idea is to regulate AI based on its capacity to cause harm to society, following a “risk-based” approach: the higher the risk, the stricter the rules. Once approved, the AI Act will be the world’s first comprehensive artificial intelligence law.

  1. Scope of application

The AI Act will apply to providers, deployers of high-risk AI systems, and importers of AI systems. Conversely, the Regulation shall not apply to providers of free and open-source models – an exemption that, nonetheless, does not extend to providers of GPAI models posing systemic risks. Furthermore, it will not apply to systems used exclusively for military, defence, or national security purposes. Similarly, the AI Act will not apply to AI systems used for the sole purpose of research and innovation.

  • Different obligations for different risk levels

The new provisions shall establish obligations for providers and users depending on the level of risk posed by the AI system.

Specifically, a limited number of systems shall be banned because they may violate fundamental rights. By way of example and not exhaustively, these include AI systems used for social scoring for public and private purposes, and AI systems that exploit the vulnerabilities of individuals through subliminal techniques. These systems fall into the category of unacceptable risk and shall be prohibited.

Other AI systems shall be considered high-risk. A list of these systems is annexed to the AI Act, in Annex III, and can be updated by the Commission to keep it aligned with the evolution of AI use cases. The list covers systems that can potentially have an adverse impact on people’s safety or their fundamental rights.

The following are examples of high-risk AI systems:

  • education and vocational training, e.g. to evaluate learning outcomes, steer the learning process, and monitor cheating;
  • employment, workers management and access to self-employment, e.g. to place targeted job advertisements, analyse and filter job applications, and evaluate candidates;
  • access to essential private and public services and benefits (e.g. healthcare), creditworthiness evaluation of natural persons, and risk assessment and pricing in relation to life and health insurance;
  • systems used in the fields of law enforcement, border control, administration of justice, and democratic processes;
  • evaluation and classification of emergency calls;
  • biometric identification, categorisation, and emotion recognition systems (outside the prohibited categories).

In the case of high-risk AI systems, the provider – before placing the system on the EU market or otherwise putting it into service – shall carry out a conformity assessment to demonstrate that the system complies with the mandatory requirements for trustworthy AI. The assessment shall be repeated if the system or its purpose is substantially modified.

Moreover, providers of high-risk AI systems shall also implement quality and risk management systems to ensure compliance with the new requirements and to minimise risks for users and affected persons, even after a product is placed on the market.

Minimal-risk AI systems, which include the majority of AI systems currently in use or likely to be used in the EU (such as AI-enabled video games or spam filters), can be used without additional legal obligations beyond those mandated by currently applicable legislation.

In addition, providers of minimal-risk systems may choose, on a voluntary basis, to implement the requirements for trustworthy AI and adhere to voluntary codes of conduct. According to the Commission, adherence to codes of conduct is open not only to providers of minimal-risk systems but to all providers of non-high-risk applications. In this regard, providers can ensure that their AI system is trustworthy by adhering to codes of conduct adopted by representative associations or even by developing their own voluntary codes of conduct.

Furthermore, specific transparency requirements shall be imposed for certain AI systems, for example where there is a clear risk of manipulation (e.g. through the use of chatbots): users shall be made aware that they are interacting with a machine.

Additionally, the AI Act addresses the systemic risks that could arise from general-purpose AI models (“GPAI”), including large generative AI models (e.g. OpenAI’s GPT-4, Google DeepMind’s Gemini, Bard, etc.), which are very capable or widely used. GPAI models can be used for a variety of tasks and are becoming the basis for many AI systems in the EU. For now, GPAI models that were trained using a total amount of compute exceeding 10^25 FLOPs (floating-point operations – i.e. a measure of the cumulative compute used for training, not of processing speed) are presumed to carry systemic risks, given that models trained with larger compute tend to be more powerful.
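As a rough illustration of how this compute threshold works in practice, the sketch below estimates a model’s training compute using the widely cited “~6 × parameters × training tokens” rule of thumb from the machine-learning literature. That heuristic, the function names, and the example figures are our own illustrative assumptions, not anything prescribed by the AI Act.

```python
# A minimal sketch, assuming the common "~6 * parameters * training tokens"
# heuristic for estimating training compute. This rule of thumb comes from
# the ML literature, NOT from the AI Act; all figures below are illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training-compute threshold


def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough estimate of total training compute (floating-point operations)."""
    return 6 * n_parameters * n_training_tokens


def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the AI Act threshold."""
    return estimate_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Illustrative example: a 100-billion-parameter model trained on 20 trillion tokens.
flops = estimate_training_flops(100e9, 20e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")                 # 1.20e+25
print("Presumed systemic risk:", presumed_systemic_risk(100e9, 20e12))  # True
```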

The AI Act shall require GPAI providers to ensure compliance with copyright law, assess and mitigate risks, report serious incidents, conduct state-of-the-art tests and model evaluations, ensure cybersecurity, and provide information on the energy consumption of their models. As a central tool to flesh out these rules and to demonstrate compliance with the relevant obligations under the AI Act, GPAI providers shall engage with the European AI Office (the “AI Office”) to draw up codes of practice.

  • AI Act and fundamental rights

Moreover, since the application of the AI Act shall comply with fundamental rights legislation, deployers that are bodies governed by public law or private operators providing public services, as well as operators providing high-risk systems, shall conduct a fundamental rights impact assessment and notify the results to the national authority[1].

The assessment shall contain a description of certain specific information (e.g. the deployer’s processes in which the high-risk AI system will be used; the period of time within which, and the frequency with which, the high-risk AI system is intended to be used; the categories of natural persons and groups likely to be affected by its use; the specific risks of harm likely to impact the affected categories of persons or groups of persons; the implementation of human oversight measures; and the measures to be taken if those risks materialise).

If the deployer has already met this obligation through the data protection impact assessment under Article 35 of the GDPR, the fundamental rights impact assessment shall be conducted in conjunction with that data protection impact assessment.

The Regulation also aims to ensure adherence to the principle of non-discrimination. The new mandatory requirements for all high-risk AI systems will serve this purpose. Indeed, those systems will have to be developed in such a way as to avoid discriminatory effects and unfair bias and to support diversity, non-discrimination, and equity: they will need to be trained and tested with sufficiently representative datasets to minimise the risk of unfair biases embedded in the model and to ensure that any such biases can be addressed through appropriate detection, correction, and other mitigating measures.

The protection of fundamental rights also extends to environmental protection and sustainability. In fact, the Commission shall request the European standardisation organisations to produce a standardisation deliverable on reporting and documentation processes for improving the resource performance of AI systems, such as reducing the energy and other resource consumption of high-risk AI systems during their lifecycle, and on the energy-efficient development of GPAI. In addition, providers of GPAI models, which are trained on large amounts of data and are therefore prone to high energy consumption, will be required to disclose the energy consumption of their models and to assess their energy efficiency.

  • Application of the AI Act

Following its adoption by the European Parliament and the Council, the AI Act shall enter into force on the 20th day following that of its publication in the Official Journal, and it will be fully applicable 2 years after entry into force, with a graduated approach as follows (an illustrative timeline computation follows the list):

  • provisions regarding prohibited systems will become applicable 6 months after the effective date;
  • obligations for GPAI models will become applicable 12 months after the effective date;
  • obligations for high-risk systems defined in Annex II will become applicable 36 months after the effective date.
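As a purely illustrative aid, the short sketch below turns this graduated schedule into concrete dates. The entry-into-force date used is a placeholder: the real date is the 20th day after publication in the Official Journal, which had not yet occurred when this article was written.

```python
from datetime import date

# A minimal sketch of the AI Act's graduated application schedule.
# The entry-into-force date is a PLACEHOLDER, not the actual date.

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped to 28)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, min(d.day, 28))

entry_into_force = date(2024, 8, 1)  # placeholder

milestones = {
    "prohibited systems": 6,
    "GPAI obligations": 12,
    "general application": 24,
    "high-risk systems (Annex II)": 36,
}

for name, months in milestones.items():
    print(f"{name}: applicable from {add_months(entry_into_force, months)}")
```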

The application and enforcement of the AI Act shall be carried out by the Member States, which hold a key role. For this purpose, each Member State shall designate one or more national competent authorities to supervise the application and implementation of the AI Act, as well as to carry out market surveillance activities. Moreover, in order to increase efficiency and to establish an official point of contact with the public and other counterparts, each Member State shall designate one national supervisory authority, which will also represent the country in the European Artificial Intelligence Board – the body entrusted with extended tasks in advising and assisting the Commission and the Member States.

Moreover, while the AI Act introduces different levels of risk and provides clear definitions, it leaves the concrete technical solutions and operationalisation primarily to industry-driven standards: in this way, the legal framework can adapt to different use cases and new technologies.

Conclusions

Although the AI Act has not been approved yet, business operators would be well advised to start mapping out their AI compliance strategy now. Indeed, some of the obligations outlined above (i.e. the provisions regarding prohibited systems and the obligations for GPAI) are scheduled to apply earlier than the general deadline of 24 months following adoption. These strategies should involve, among other things, conducting preliminary assessments of the AI systems and models that are intended to be developed, commercialised, and/or adopted, the uses of such systems, and the associated risks.

Business operators should also consider that any infringement of the AI Act will lead to the application of a financial penalty by the Member States, which shall be effective, proportionate, and dissuasive, and may be pegged to the offender's total worldwide annual turnover for the preceding financial year.

Specifically, penalties are expected to amount to:

  • up to €35 million or 7% of the turnover, for infringements concerning prohibited practices or non-compliance with requirements on data;
  • up to €15 million or 3% of the turnover, for non-compliance with any of the other requirements or obligations of the AI Act, including infringements of the rules on GPAI;
  • up to €7.5 million or 1.5% of the turnover, for the supply of incorrect, incomplete, or misleading information to notified bodies and national competent authorities in reply to a request.

For infringements committed by SMEs, a proportionate reduction of the penalty is provided, as illustrated in the sketch below.
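To make these caps concrete, here is a minimal sketch of how the maximum fine could be computed for a given turnover. The tier amounts come from the list above; the “whichever is higher” rule (and the lower of the two caps for SMEs) reflects our reading of the provisional agreement and should be treated as an assumption, as should all names and figures in the example.

```python
# A minimal sketch of the AI Act penalty caps listed above. The tier amounts
# come from the article; the "whichever is higher" rule (and the lower cap
# for SMEs) is our reading of the provisional agreement, i.e. an assumption.

# tier -> (fixed cap in EUR, share of total worldwide annual turnover)
PENALTY_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "other_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}


def max_penalty(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum fine for an infringement tier and a given turnover."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    turnover_cap = turnover_share * annual_turnover_eur
    # Assumption: the higher of the two caps applies; for SMEs, the lower one.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)


# Illustrative example: a company with EUR 2 billion annual turnover.
print(f"{max_penalty('prohibited_practices', 2e9):,.0f} EUR")               # 140,000,000 EUR
print(f"{max_penalty('prohibited_practices', 2e9, is_sme=True):,.0f} EUR")  # 35,000,000 EUR
```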

Moreover, it should be considered that individuals affected by a rule violation can lodge a complaint with a national authority, which in turn can launch market surveillance activities.


[1] All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

