The three distinct legal regimes for AI systems:
The AI Act describes itself as risk-based legislation that distinguishes between three distinct legal regimes for AI systems. Each regime lays down legal obligations that correlate with the anticipated risks to public interests and values protected by EU law.
The first regime covers AI systems raising unacceptable risks to EU values such as respect for human dignity, democracy, or the protection of fundamental rights. MEPs substantially amended the list to include bans on intrusive and discriminatory uses of AI systems such as:
- Real-time remote biometric identification systems in publicly accessible spaces;
- "Post" remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes and only after judicial authorization;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).
Through these prohibitions, the AI Act adopts a precautionary approach, denying these systems market access because the risks stemming from their use are deemed too high for risk-mitigating interventions.
The second regime concerns AI systems that give rise to high risks to health, safety, and fundamental rights. These are:
- Biometric identification and categorisation of natural persons;
- AI systems used for the management and operation of critical infrastructure;
- AI systems used for educational and vocational training;
- AI systems used in employment, e.g., for recruitment or promotions;
- Some AI systems used for law enforcement; and
- AI systems used for asylum and border control management.
MEPs expanded the classification of high-risk areas to include harm to people’s health, safety, fundamental rights, or the environment. They also added to the high-risk list AI systems used to influence voters in political campaigns, as well as recommender systems used by social media platforms (those with more than 45 million users under the Digital Services Act).
AI systems that give rise to high risks to health, safety, and fundamental rights must be:
- First, submitted to specific technical requirements which must be complied with before the AI system is placed on the market, put into service, or used.
- Second, submitted to a complex post-market monitoring procedure which seeks to address risks not eliminated upfront, laying down obligations for providers of AI systems and other stakeholders such as national market surveillance authorities.
Legal requirements for high-risk AI systems
- Fulfilling an ex-ante conformity assessment.
- Undergoing a Conformité Européenne (CE) certification scheme.
- Establishing a quality management system, which is subjected to independent audit.
- Conducting a Fundamental Rights Impact Assessment (FRIA).
- Implementing post-market monitoring plans.
- Identification and analysis of known and foreseeable risks.
- Evaluation of risks based on post-market monitoring.
- Adoption of suitable risk management measures to address them.
- Drawing up technical documentation that demonstrates compliance with the legislation before the system is placed on the market.
- Use of automated event-logging capabilities (a minimal illustration follows this list).
- Reporting of serious incidents or malfunctions that may qualify as a breach of the EU AI Act.
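Of these requirements, the automated event-logging obligation lends itself to a brief technical illustration. The following is a minimal, hypothetical sketch of how a provider might record automatically generated events for each decision a high-risk AI system takes, so that incidents can later be traced and reported. The field names and the `log_inference_event` helper are illustrative assumptions, not a format prescribed by the AI Act.

```python
# Hypothetical sketch of automated event logging for a high-risk AI system.
# Field names and the helper function are illustrative assumptions, not
# terminology mandated by the AI Act.
import json
import logging
import uuid
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")
logging.basicConfig(level=logging.INFO)

def log_inference_event(model_version, input_reference, output_summary, operator_id):
    """Record one automatically generated event so it can be traced later."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_reference": input_reference,   # a reference, not raw personal data
        "output_summary": output_summary,
        "operator_id": operator_id,           # supports meaningful human oversight
    }
    logger.info(json.dumps(event))
    return event["event_id"]

# Example: logging a single decision made by the system.
log_inference_event("v1.3.0", "application-2024-0042", "risk_score=0.12", "case-officer-7")
```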
Integrating the Fundamental Rights Impact Assessment with the GDPR DPIA
- Phase 1 describes the purpose for which the AI system is developed and defines the tasks and responsibilities of the parties involved;
- Phase 2 assesses the risks to fundamental rights that might occur at different stages of the AI system’s development;
- Phase 3 justifies why the risks to rights infringements are proportionate; and
- Phase 4 employs organizational and/or technical measures to reduce and mitigate the risks identified.
In Phase 1, the organization indicates whether the envisioned AI system serves:
- public interest purposes, such as efficiencies in terms of more accurate public service delivery or faster crime detection; or
- private interest, including commercial, marketing or economic purposes.
Where the organization seeks to serve a range of purposes, it should rank and explain the purposes according to their priority in relation to the use of the envisioned AI system. In this Phase, the organization should also identify what fundamental rights might potentially be impacted by the intended AI system. The answers in Phase 1 form the basis for an organization’s responses in Phases 2–4.
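As an illustration only, the outcome of the four phases could be documented in a simple structured record along the following lines. The `FRIARecord` class and its field names are assumptions made for this sketch, not a format required by the AI Act or the GDPR.

```python
# Hypothetical sketch of how the four FRIA phases could be captured as a
# structured record; the class and field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FRIARecord:
    # Phase 1: purposes, roles, and potentially affected rights
    purposes_ranked: List[str]                 # highest priority first
    parties_and_responsibilities: List[str]
    rights_potentially_impacted: List[str]
    # Phase 2: risks identified at different development stages
    identified_risks: List[str] = field(default_factory=list)
    # Phase 3: proportionality justification
    proportionality_justification: str = ""
    # Phase 4: organizational and/or technical mitigation measures
    mitigation_measures: List[str] = field(default_factory=list)

# Example: starting the record with the Phase 1 answers.
assessment = FRIARecord(
    purposes_ranked=["faster crime detection", "more accurate public service delivery"],
    parties_and_responsibilities=["provider: model development", "deployer: case handling"],
    rights_potentially_impacted=["privacy", "non-discrimination"],
)
```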
When assessing the risks to fundamental rights, relevant factors include:
- Technical factors, such as the opacity of the AI system, false positives/negatives, the use of special categories of data, and mechanisms to detect and address bias (a short illustration of one such mechanism follows this list);
- Organisational factors, such as meaningful human intervention, transfers of personal data, stakeholder involvement in the FRIA, and the alignment of tasks and responsibilities; and
- Legal factors, such as the nature of the rights impacted, notification of individuals that an AI system is used, informing individuals about their rights and giving them the means to exercise them, and trade secret disclosures.
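One of the technical factors above, the rate of false positives/negatives and the mechanisms to detect bias, can be made concrete with a short sketch that compares false positive rates across groups. The records and group labels below are invented purely for illustration.

```python
# Hypothetical sketch of one bias-detection mechanism: comparing false
# positive rates across groups. The records and group labels are invented
# for illustration only.
from collections import defaultdict

# Each record: (group, predicted_positive, actually_positive)
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

false_positives = defaultdict(int)
actual_negatives = defaultdict(int)
for group, predicted, actual in records:
    if not actual:
        actual_negatives[group] += 1
        if predicted:
            false_positives[group] += 1

for group in sorted(actual_negatives):
    rate = false_positives[group] / actual_negatives[group]
    print(f"{group}: false positive rate = {rate:.2f}")

# A large gap between groups signals that bias-mitigation measures are needed.
```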
To reduce and mitigate the identified risks in Phase 4, possible organizational and technical measures include the following (a brief code sketch of two of them follows the list):
- Prevent continuous data capture
- Minimise data retention periods
- Consider on-device processing rather than centralized in-company processing
- Give individuals meaningful tools to manage processing
- Avoid third-party data sharing for commercial purposes
- Use AI systems that can be reviewed
- Treat all personal data as special categories of data
- Implement meaningful human intervention
- Build in mechanisms for GDPR data subject requests
- Regularly evaluate the entire AI system to identify glitches
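Two of these measures, minimising data retention periods and building in mechanisms for GDPR data subject requests, are sketched below in a minimal, hypothetical form. The storage layout, retention period, and function names are assumptions made for illustration.

```python
# Hypothetical sketch of two mitigation measures: minimising data retention
# periods and honouring GDPR erasure requests. The storage layout, retention
# period, and function names are illustrative assumptions.
from datetime import datetime, timedelta, timezone

RETENTION_PERIOD = timedelta(days=30)  # keep records only as long as strictly needed

# In-memory stand-in for a data store: subject_id -> list of (timestamp, record)
store = {}

def purge_expired(now=None):
    """Delete records older than the retention period (data minimisation)."""
    now = now or datetime.now(timezone.utc)
    for subject_id, subject_records in store.items():
        store[subject_id] = [(ts, r) for ts, r in subject_records
                             if now - ts < RETENTION_PERIOD]

def handle_erasure_request(subject_id):
    """Erase all records held about one individual (GDPR right to erasure)."""
    return store.pop(subject_id, None) is not None

# Example: an old record is purged, and a data subject then asks for erasure.
store["subject-123"] = [(datetime.now(timezone.utc) - timedelta(days=60), "old risk score")]
purge_expired()
print(handle_erasure_request("subject-123"))  # True: the subject's entry was removed
```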
Sources:
Author: Petruta Pirvan, Founder and Legal Counsel Data Privacy and Digital Law, EU Digital Partners