AI Act. A safety framework constructed around risk categories

On 14 June, the Members of the EU Parliament voted by a large majority in favour of the AI Regulation.

The EU AI Act is one of the first major laws to regulate artificial intelligence, and, as Adam Satariano observes for the New York Times, “a potential model for policymakers around the world as they grapple with how to put guardrails on the rapidly developing technology.”

 

It is worth mentioning that the EU Parliament's amendments deviate from the European Commission's original text in that strict requirements intended for high-risk AI systems have been extended to other useful AI applications that pose only limited risks.

 

In fact, since the AI Act was first proposed, the technology has developed to the point where ChatGPT upended the EU's plan to regulate AI.

Discussions on systems trained on large datasets, i.e. large language models (LLMs), have therefore spurred amendments to the initial text, with the EU Parliament seeking a clear distinction between general purpose AI (GPAI) and large language models and aiming to introduce a stricter regime for the latter.

 

What’s on the plate

  • LLMs must maintain appropriate levels of performance, explainability, corrigibility (by human input), safety, and cybersecurity throughout their lifecycle.
  • Independent experts, documented analysis, and extensive testing must be employed for the above purpose.
  • LLM providers are subject to obligations to implement a quality management system and to keep the relevant documents available for up to 10 years after the system is launched.
  • LLM systems must be registered in the EU database.
  • Generative AI (GenAI) systems are subject to heightened transparency obligations and safeguards against generating content in breach of EU law.
  • Disclosure obligations apply regarding the use of training data protected under copyright law.
  • Disclosure obligations apply regarding the computing power required and the training time of the model.

Let’s be very clear. Behind the hype, what does the Regulation mean for companies?

The AI Regulation is a product safety framework constructed around risk categories.

The law creates new conformity obligations, based largely on the technical nature of high-risk AI systems themselves, but also on that of certification and standardization mechanisms.

 

The legislator provides that high-risk AI systems must, at a minimum, follow conformity assessment procedures before they can be placed on the Union’s market, in order to minimize risks relating to confidentiality and data analysis, precision, robustness, resilience, explainability, and non-discrimination.

 

Manufacturers of AI systems will have to manage large amounts of data, including personal data, whose integrity and confidentiality they must guarantee.

 

Data governance obligations apply where high-risk AI systems use techniques involving models that learn from data, based on training, validation, and test datasets. Datasets must be reliable, which is why the law requires that training, validation and testing datasets be subject to appropriate data governance and management practices, such as:

  • the relevant design choices;
  • annotation, labelling, cleaning, enrichment and aggregation;
  • a prior assessment of the availability, quantity and suitability of the datasets that are needed;
  • examination in view of possible biases;
  • the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed;
  • transparency on the source of the datasets, their reach and key characteristics, and how the data were obtained and selected.
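By way of illustration only, a provider could capture these practices in a machine-readable record kept alongside each dataset. The following minimal Python sketch is an assumption about how such a record might be shaped; the field names and example values are invented for the illustration and are not terms prescribed by the Act.

# Illustrative only: a hypothetical record of the data governance practices
# listed above, kept per training/validation/test dataset.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetGovernanceRecord:
    name: str                        # dataset identifier
    role: str                        # "training", "validation" or "test"
    source: str                      # where the data were obtained
    selection_criteria: str          # how the data were selected
    design_choices: str              # relevant design choices
    preparation_steps: List[str] = field(default_factory=list)   # labelling, cleaning, enrichment, aggregation
    suitability_assessment: str = ""                              # availability, quantity, suitability
    bias_examination: str = ""                                    # examination in view of possible biases
    known_gaps: List[str] = field(default_factory=list)           # identified data gaps or shortcomings
    gap_mitigations: List[str] = field(default_factory=list)      # how those gaps are addressed

record = DatasetGovernanceRecord(
    name="claims_training_v3",
    role="training",
    source="internal claims archive, 2015-2022",
    selection_criteria="closed claims only, duplicates removed",
    design_choices="binary fraud label, per-claim granularity",
    preparation_steps=["manual labelling", "PII removal", "aggregation by claim"],
    suitability_assessment="volume and coverage judged sufficient for the intended purpose",
    bias_examination="label rates compared across regions and age bands",
    known_gaps=["few examples from 2015-2016"],
    gap_mitigations=["oversampling of early years"],
)

A record of this kind is not mandated by the Regulation; it simply makes the listed governance steps auditable in one place.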

 

Furthermore, the legislator establishes rules for graduated risk management to help technical operators meet their obligations.

 

A risk management system consisting of an iterative, continuous process conducted throughout the life cycle of a high-risk AI system must be established, implemented, documented, and maintained.

The management system consists mainly of obligations to identify and minimize foreseeable risks, to manage residual risks, and to carry out testing. Risk minimization is based on the adoption of suitable measures, as reflected notably in relevant harmonised standards or common specifications.

 

Residual risks are considered acceptable, provided that the high-risk AI system is used for its intended purpose or under conditions of reasonably foreseeable misuse.

 

The user must be informed of these residual risks, while risk minimization or mitigation measures must be taken.
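To make the sequence concrete, here is a purely illustrative sketch of a risk register reflecting the identify, mitigate, residual-risk and inform-the-user steps described above. The structure, field names and example entries are assumptions made for the illustration, not requirements of the Act.

# Illustrative only: a simple living risk register maintained across the lifecycle.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    severity: str              # e.g. "low", "medium", "high"
    mitigation: str            # measure adopted, e.g. per a relevant harmonised standard
    residual_acceptable: bool  # judged acceptable under intended purpose / foreseeable misuse
    user_informed: bool        # residual risk communicated to the user

register = [
    Risk("biased outcomes for under-represented groups", "high",
         "rebalanced training data and post-deployment monitoring",
         residual_acceptable=True, user_informed=True),
    Risk("degraded accuracy on out-of-distribution inputs", "medium",
         "input validation and human review of low-confidence outputs",
         residual_acceptable=True, user_informed=False),
]

# Residual risks that are not acceptable, or not yet communicated, need follow-up.
open_items = [r for r in register if not (r.residual_acceptable and r.user_informed)]
print(f"{len(open_items)} risk(s) still require action")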

 

High-risk AI systems must be tested to determine the most appropriate risk management measures at any point during the development process and, in any case, before being placed on the market or put into service, on the basis of previously defined metrics and probabilistic thresholds adapted to the intended purpose of the high-risk AI system.
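As a purely illustrative sketch of what testing against previously defined metrics and probabilistic thresholds might look like in practice, consider the following Python snippet; the metric names and threshold values are invented for the example and are not taken from the Regulation.

# Illustrative only: checking measured results against predefined thresholds
# before a high-risk system is placed on the market or put into service.
thresholds = {
    "accuracy": 0.95,                # minimum acceptable accuracy
    "false_negative_rate": 0.02,     # maximum acceptable miss rate
    "demographic_parity_gap": 0.05,  # maximum acceptable outcome gap between groups
}

measured = {
    "accuracy": 0.962,
    "false_negative_rate": 0.017,
    "demographic_parity_gap": 0.061,
}

def passes(metric: str, value: float) -> bool:
    """Higher is better for accuracy; lower is better for the error and gap metrics."""
    limit = thresholds[metric]
    return value >= limit if metric == "accuracy" else value <= limit

failures = [m for m, v in measured.items() if not passes(m, v)]
if failures:
    print("Further risk management measures needed for:", ", ".join(failures))
else:
    print("All predefined thresholds met for the intended purpose.")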

 

Which are the high-risk AI systems?

  • Biometric identification and categorisation of natural persons
  • AI used for the management and operation of critical infrastructure
  • AI used for education and vocational training
  • AI systems used in employment, e.g. for recruitment or promotions
  • Some AI systems used for law enforcement
  • AI systems used for asylum and border control management

MEPs expanded the classification of high-risk areas to include:

  • Harm to people’s health, safety, fundamental rights or the environment
  • AI systems used to influence voters in political campaigns and in recommender systems used by social media platforms (with more than 45 million users under the Digital Services Act)

Which are the prohibited AI systems?

AI systems with an unacceptable level of risk to people’s safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status, personal characteristics). 

 

MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems, such as:

  • Real-time remote biometric identification systems in publicly accessible spaces;
  • Post remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).

Next step

A trilogue stage will follow: an informal inter-institutional negotiation bringing together representatives of the European Parliament, the Council of the European Union and the European Commission. The aim of the trilogue is to reach a provisional agreement on the AI Act that is acceptable to both the Parliament and the Council, the co-legislators. This provisional agreement must then be adopted through each of those institutions’ formal procedures. Therefore, a final version of the law is not expected to be passed until later this year.

Author: Petruta Pirvan, Founder and Legal Counsel, Data Privacy and Digital Law, EU Digital Partners