Accountability for High Risk AI Systems in the EU AI Act

The three distinct legal regimes for AI systems

The AI Act describes itself as risk-based legislation that distinguishes between three distinct legal regimes for AI systems. Each regime lays down legal obligations that correlate with the anticipated risks to public interests and values protected by EU law.

 

The first regime covers AI systems raising unacceptable risks to EU values such as respect for human dignity, democracy, or the protection of fundamental rights. The MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems such as:

  • Real-time remote biometric identification systems in publicly accessible spaces;
  • "Post" remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes and only after judicial authorization;
  • Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing systems (based on profiling, location or past criminal behaviour);
  • Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).

Through these prohibitions, the AI Act adopts a precautionary approach, denying such systems market access because the risks stemming from their use are deemed too high to be addressed by risk-mitigating interventions. 

 

The second regime concerns AI systems that give rise to high risks to health, safety, and fundamental rights. These include: 

  • Biometric identification and categorisation of natural persons
  • AI used for the management and operation of critical infrastructure
  • AI used for educational and vocational training
  • AI systems used in employment, e.g. for recruitment or promotions
  • Some AI systems used for law enforcement
  • AI systems used for asylum and border control management

MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. They also added to the high-risk list AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms with more than 45 million users (the threshold for designation under the Digital Services Act).

AI systems that give rise to high risks to health, safety, and fundamental rights are:

  1. First, subject to specific technical requirements which must be complied with before the AI system is placed on the market, put into service, or used; and
  2. Second, subject to a complex post-market monitoring procedure which seeks to address risks not eliminated upfront, laying down obligations for providers of AI systems and other stakeholders such as national market surveillance authorities.

The third regime covers all other AI systems, which pose minimal risks and are subject to other instruments of EU law. For example, AI systems used within consumer goods must comply with the General Product Safety Directive (GPSD), which lays down baseline requirements for consumer products not covered by any specific product safety regulation. 
 

Legal requirements for high-risk AI systems 

High-risk AI systems can be placed on the market only after satisfying the requirements below; an illustrative checklist sketch follows the list.
 
A. Assessments 
  • Fulfilling an ex-ante conformity assessment.
  • Undergoing a Conformité Européenne (CE) certification scheme.
  • Establishing a quality management system, which is subjected to independent audit.
  • Conducting a Fundamental Rights Impact Assessment (FRIA). 
  • Implementing post-market monitoring plans.
B. Risk Management. The AI Act mandates a continuous and iterative risk management system for high-risk AI systems. This shall comprise the following:
  • Identification and analysis of known and foreseeable risks,
  • Evaluation of risks based on post-market monitoring, and
  • Adoption of suitable risk management measures to address them.
C. Transparency. Providers of high-risk AI systems must ensure that the operation of their systems is sufficiently transparent and is accompanied by well-articulated instructions for use.
 
D. Reporting, Documentation and Notification.
  • Technical documentation demonstrating compliance with the legislation, drawn up before the system is placed on the market
  • Use of automated event-logging capabilities
  • Notification of serious incidents or malfunctions that may qualify as a breach of the EU AI Act.
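
As a purely illustrative aid, the pre-market obligations above can be tracked internally as a simple compliance checklist. The sketch below is a minimal Python example; the item names and the ComplianceChecklist structure are assumptions made for illustration and are not terms defined by the AI Act.

```python
from dataclasses import dataclass, field

# Illustrative only: item names paraphrase the pre-market obligations
# discussed above; they are not legal terms from the AI Act.
PRE_MARKET_ITEMS = [
    "ex-ante conformity assessment",
    "CE certification",
    "quality management system (independently audited)",
    "fundamental rights impact assessment (FRIA)",
    "post-market monitoring plan",
    "risk management system (continuous and iterative)",
    "transparency / instructions for use",
    "technical documentation",
    "automated event logging",
    "serious incident notification procedure",
]

@dataclass
class ComplianceChecklist:
    """Tracks which pre-market obligations have documented evidence."""
    system_name: str
    completed: dict = field(default_factory=dict)  # item -> evidence reference

    def mark_done(self, item: str, evidence: str) -> None:
        if item not in PRE_MARKET_ITEMS:
            raise ValueError(f"Unknown checklist item: {item}")
        self.completed[item] = evidence

    def ready_for_market(self) -> bool:
        # The system may be placed on the market only once every item is evidenced.
        return all(item in self.completed for item in PRE_MARKET_ITEMS)

checklist = ComplianceChecklist("recruitment-screening-model")
checklist.mark_done("fundamental rights impact assessment (FRIA)", "FRIA-2023-007")
print(checklist.ready_for_market())  # False until all items are evidenced
```

The point of the sketch is simply that a provider cannot treat the system as ready for market until every obligation has documented evidence behind it.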

Integrating the Fundamental Rights Impact Assessment (FRIA) with the GDPR DPIA 

By applying existing guidance and what we have learned from GDPR compliance mechanisms, we can conduct FRIAs in four phases, as follows: 
 
  1. Phase 1 describes the purpose for which the AI system is developed and defines the tasks and responsibilities of the parties involved; 
  2. Phase 2 assesses the risks to fundamental rights that might occur at different stages of the AI system’s development; 
  3. Phase 3 justifies why the risks of rights infringements are proportionate; and 
  4. Phase 4 employs organizational and/or technical measures to reduce and mitigate the risks identified.

In Phase 1, the organization provides a complete and detailed description of the purpose(s) for which the AI system will be used, and explains why it seeks to develop, purchase or deploy an AI system, and not another system entailing fewer risks to rights. 
 
The organization explains what problem(s) the system should solve, and elaborates on the reasons, the underlying motives, and the intended effects of the envisaged AI system. 
 
These can be:
  • public interest purposes, such as efficiencies in terms of more accurate public service delivery or faster crime detection; or 
  • private interest purposes, including commercial, marketing or economic purposes. 

Where the organization seeks to serve a range of purposes, it should rank and explain the purposes according to their priority in relation to the use of the envisioned AI system. In this Phase, the organization should also identify what fundamental rights might potentially be impacted by the intended AI system. The answers in Phase 1 form the basis for an organization’s responses in Phases 2–4.
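
As an illustrative aid only, the Phase 1 output described above (purposes ranked by priority, parties and their responsibilities, and the fundamental rights potentially impacted) could be captured in a simple structured record. The sketch below shows one possible way to organise such a record in Python; none of the field names come from the AI Act, the GDPR, or the cited guidance.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Purpose:
    """A purpose for the envisaged AI system, ranked by priority (1 = highest)."""
    description: str
    kind: str          # e.g. "public interest" or "private/commercial interest"
    priority: int

@dataclass
class FriaPhase1:
    """Phase 1 of a FRIA: purposes, responsibilities, and rights potentially impacted."""
    system_description: str
    problem_to_solve: str
    purposes: List[Purpose] = field(default_factory=list)
    responsibilities: dict = field(default_factory=dict)   # party -> role/tasks
    rights_potentially_impacted: List[str] = field(default_factory=list)
    why_not_a_less_risky_alternative: str = ""

    def ranked_purposes(self) -> List[Purpose]:
        # Purposes ordered by their stated priority, as Phase 1 requires.
        return sorted(self.purposes, key=lambda p: p.priority)

phase1 = FriaPhase1(
    system_description="CV-screening model used to shortlist job applicants",
    problem_to_solve="Reduce time spent on initial screening of applications",
    purposes=[Purpose("Faster, cheaper internal processing",
                      "private/commercial interest", 1)],
    responsibilities={"provider": "develops and documents the model",
                      "deployer": "uses the model and informs applicants"},
    rights_potentially_impacted=["non-discrimination", "data protection"],
    why_not_a_less_risky_alternative="Rule-based screening considered but rejected because ...",
)
print([p.description for p in phase1.ranked_purposes()])
```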

 
In Phase 2, the factors that could be assessed include:
 
  • Technical factors such as the opacity of the AI system, false positives/negatives, special categories of data, mechanisms to detect and address bias, etc.;
  • Organisational factors such as meaningful human intervention, transfers of personal data, stakeholder involvement in the FRIA, and the alignment of tasks and responsibilities; and
  • Legal factors such as the nature of the rights impacted, notifications to individuals that an AI system is being used, whether individuals are informed about their rights and given the means to exercise them, and trade secret disclosures.

In Phase 3, organizations must explain from a more general perspective why the reasons for using the system outweigh the potential impact on and harms to fundamental rights. Generally, the more likely it is that the system entails harmful effects, the more significant the purposes an organization puts forward to justify the system must be. 
 
Each situation requires a context-specific balancing, whether applied to public or private organizational purposes. It is important to note that typical commercial purposes (e.g., increasing profit, developing strategies to improve the organization’s market position, conducting research into (new) client behaviour to identify new markets, improving service delivery, or achieving faster, cheaper and more efficient internal processing) will often not by themselves be sufficient to justify the use of AI systems, specifically where risks to fundamental rights were identified in Phase 2.
 
Phase 4 involves organizations considering technical, legal and/or organizational measures to reduce the risks involved. Examples of mitigating measures include the following (a brief illustrative sketch of how Phases 2–4 might be recorded follows the list): 
 
  • Prevent continuous data capture 
  • Minimise terms for data retention 
  • Consider on-device processing rather than centralized in-company processing 
  • Give individuals meaningful tools to manage processing 
  • Avoid third-party data sharing for commercial purposes 
  • Use AI systems that can be reviewed 
  • Treat all personal data as special categories of data
  • Implement meaningful human intervention 
  • Build in mechanisms for GDPR data subject requests 
  • Regularly evaluate the entire AI system to identify glitches 
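
Continuing the illustrative sketch started for Phase 1, the outputs of Phases 2–4 (identified risk factors, the proportionality justification, and the chosen mitigating measures) could be recorded alongside it. Again, the field names and the simple likelihood scale are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Risk:
    """A risk to fundamental rights identified in Phase 2."""
    description: str
    factor_type: str        # "technical", "organisational", or "legal"
    likelihood: str         # e.g. "low" / "medium" / "high" (illustrative scale)

@dataclass
class FriaPhases2to4:
    risks: List[Risk] = field(default_factory=list)                # Phase 2
    proportionality_justification: str = ""                        # Phase 3
    mitigating_measures: List[str] = field(default_factory=list)   # Phase 4

    def unmitigated_high_risks(self) -> List[Risk]:
        # High-likelihood risks that have no mitigating measures recorded yet.
        if self.mitigating_measures:
            return []
        return [r for r in self.risks if r.likelihood == "high"]

record = FriaPhases2to4(
    risks=[Risk("False negatives disadvantage certain applicant groups",
                "technical", "high")],
    proportionality_justification="Efficiency gains alone are not sufficient; ...",
    mitigating_measures=["meaningful human review of all rejections",
                         "minimise data retention terms",
                         "regular evaluation of the system for bias"],
)
print(len(record.unmitigated_high_risks()))  # 0, because mitigations are recorded
```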

Sources:

1. The EU AI Act: Between Product Safety and Fundamental Rights
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4308072 
2. Practical Fundamental Rights Impact Assessments
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4315823 

Author: Petruta Pirvan, Founder and Legal Counsel Data Privacy and Digital Law, EU Digital Partners