Regulation laying down harmonized rules on AI and amending certain Union legislative acts. Key findings

On April 21, 2021, the European Commission unveiled the draft Regulation laying down harmonized rules on AI and amending certain Union legislative acts, together with a new coordinated plan with Member States. The released draft reveals a comprehensive approach to an area that, until then, had benefited from little regulatory framing. The draft comprises a set of stringent obligations applicable to the different stakeholders involved in the AI supply chain: providers, users, importers and distributors of AI systems. Following the GDPR format, the AI Regulation contains articles of law linked with suitable recitals. The overarching goal of the AI Regulation is to foster public trust in AI products, through compliance checks and balances and through use of AI in line with EU values, and thereby to encourage the uptake of trustworthy and human-centric AI. The Regulation is considered necessary both to encourage the development of AI as a strategic key driver for the bloc's economy and to manage the associated risks for individuals and society as a whole, enabling what European Commission President Ursula von der Leyen has called “A Europe fit for the Digital Age”.

 

Article 3.1 defines an AI system as software developed with one or more specific techniques and approaches (for example machine learning, deep learning, logic- and knowledge-based approaches, deductive engines, or statistical approaches) that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.

 

 

The Regulation has extraterritorial effect: it applies to providers of AI systems irrespective of whether they are established in the Union or in a third country, as long as they place on the market or put into service an AI system in the Union, or as long as the output produced by the AI system is used in the Union. The Regulation equally applies to users established within the Union that procure and make use of AI systems in the Union, and to importers established in the Union that place on the market or put into service an AI system bearing the name or trademark of a natural or legal person established outside the Union. Distributors that make an AI system available on the Union market are also in scope for complying with the Regulation.

 

Article 5 of the Regulation contains a list of prohibited AI practices deemed to pose unacceptable risks to individuals' fundamental rights. Such practices include:

 

  • AI systems that distort an individual's behaviour in a manner that causes physical or psychological harm
  • AI systems that distort the behaviour of a group of individuals in a manner that causes physical or psychological harm
  • AI systems used for social scoring of individuals, consisting of a large-scale evaluation or classification of trustworthiness based on their social behaviour and leading to certain forms of detrimental treatment of those individuals
  • AI systems using real-time remote biometric identification technology in publicly accessible spaces for the purpose of law enforcement, unless and in so far as such use is strictly necessary for one of a limited set of objectives, such as: a) the targeted search for specific potential victims of crime, including missing children; b) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; or c) the detection, localization, identification or prosecution of a perpetrator or suspect of a criminal offence for which a European arrest warrant has been issued

High-risk AI systems. Under the Regulation, providers and users of AI will need to determine whether a particular use case is high-risk and, consequently, whether they need to conduct a mandatory pre-market conformity assessment.

 

According to the Regulation, where an AI system is deemed to present a high risk, providers have a transparency obligation toward the individuals concerned, who must be informed that they are going to interact with a high-risk AI system. On top of this, individuals must be made aware when their personal data is being processed by an emotion recognition system or a categorization system, and when image, audio or video content has been artificially created or manipulated so that it resembles existing persons, objects or events and falsely appears authentic (see deepfakes). Additional requirements related to algorithmic transparency apply even when AI systems are not considered high-risk but are still designed to interact with individuals (see chatbots). In this case the software design needs to ensure that the affected individuals are notified that they are interacting with an AI system.

 

Besides these transparency obligations, providers of high-risk AI systems are required to implement adequate risk assessment and mitigation systems; use high-quality training and testing data to minimize risks and discriminatory outcomes; maintain documentation and record-keeping that ensure traceability of results; and provide for human oversight, product safety, accuracy of outputs and security, alongside the obligation to register such systems with the Commission.

 

Importers and distributors are required to place on the market only compliant AI systems. Compliant businesses are encouraged to display a ‘CE’ marking to help them win the trust of users and gain free access across the bloc’s single market. Accordingly, Recital 67 of the Regulation states: “High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.”

 

Although users have fewer obligations, according to article 29.5 of the Regulation, users of high-risk AI systems must carry out a data protection impact assessment under article 35 of the GDPR where personal data is involved and must monitor the performance of AI systems on an ongoing basis.

 

Per article 71.1. “In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests of small-scale providers and start-up and their economic viability.”

 

Under the Regulation, non-compliance with certain legal obligations can result in administrative fines of up to 6 % of an organization's total worldwide annual turnover for the preceding financial year.

 

As with the GDPR, the proposed AI Regulation leaves enforcement in the hands of the Member States, which will be responsible for designating one or more national competent authorities to supervise the application of the oversight regime. Article 59.2 provides that: “Each Member State shall designate a national supervisory authority among the national competent authorities. The national supervisory authority shall act as notifying authority and market surveillance authority unless a Member State has organizational and administrative reasons to designate more than one authority.”

 

It remains to be seen whether leaving enforcement in the hands of the Member States will ensure consistent application of the AI Regulation across the EU bloc or whether, on the contrary, it will result in a host of misaligned applications and interpretations of the new Regulation.

Author: Petruta Pirvan, Founder and Legal Counsel Data Privacy and Digital Law EU Digital Partners