The world’s first comprehensive AI law, the AI Act, has been agreed upon

The AI Act is part of the larger EU digital strategy agenda alongside regulations such as the Data Governance Act, the Data Act, the Digital Services Act, and the Digital Markets Act.

The 125-page draft law to regulate AI was initially introduced in April 2021 and has since garnered a reputation as the global model for regulating the technology.


Since then, however, ChatGPT has been released and made it big on the market. The problem is that this type of AI was not mentioned in the draft law and was not a major focus of discussions until negotiations were about to close.
EU lawmakers therefore prepared texts to address the gap, even as tech executives warned that overly aggressive regulation could put Europe at an economic disadvantage. Fast forward: in the past few days, EU lawmakers spent tens of hours in rounds of discussions and negotiations, under pressure to strike a deal before the EU Parliament election campaign begins in 2024.
Discussions focused on AI’s riskiest uses by companies and governments, including law enforcement and the operation of critical infrastructure. Under the agreed texts, ChatGPT and similar chatbots will face new transparency requirements. Chatbots and software that create manipulated images, such as “deepfakes”, will have to make clear that what people are seeing is generated by AI.
The layered risk approach was kept and, according to a Parliament release, AI systems will be classified as follows:
Unacceptable risk. AI systems considered a threat to people will be banned. They include:
  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example, voice-activated toys that encourage dangerous behaviour in children.
  • Social scoring: classifying people based on behaviour, socio-economic status, or personal characteristics.
  • Real-time and remote biometric identification systems, such as facial recognition.

High risk. AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

  • AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices, and lifts.
  • AI systems falling into eight specific areas that will have to be registered in an EU database:
  1. Biometric identification and categorisation of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, worker management and access to self-employment
  5. Access to and enjoyment of essential private services and public services and benefits
  6. Law enforcement
  7. Migration, asylum, and border control management
  8. Assistance in legal interpretation and application of the law

All high-risk AI systems will be assessed before being put on the market and throughout their lifecycle.


Generative AI. Generative AI, like ChatGPT, will have to comply with transparency requirements:

  • Disclosing that the content was generated by AI.
  • Designing the model to prevent it from generating illegal content.
  • Publishing summaries of copyrighted data used for training.

Limited risk. Limited risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. After interacting with such applications, users can decide whether they want to continue using them. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio, or video content, for example deepfakes.


It seems that 2023 marked a race between AI development and regulators in Brussels, Washington, and elsewhere.


While regulators race to catch up with dazzling AI developments, numerous concerns have been raised that the powerful technology will automate away jobs, turbocharge the spread of disinformation, and eventually develop its own kind of intelligence.


As Adam Satariano and Cecilia Kang have reported for the New York Times: “Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.”


Yet even as the law marks a regulatory breakthrough, questions remain about how effective it will be. Many aspects of the policy are not expected to take effect for 12 to 24 months, a considerable length of time in AI development.


Author: Petruta Pirvan, Founder and Legal Counsel Data Privacy and Digital Law at EU Digital Partners