The Impact of GDPR on AI

In a recent publication, the EU Parliament’s research unit examines the relationship between the General Data Protection Regulation (GDPR) and Artificial Intelligence (AI). The study explores the challenges and opportunities arising from the convergence of these two domains, shedding light on the ways in which law and technology can either counter risks or enable opportunities for individuals and society at large.

The study begins by examining the tensions and proximities between AI and key data protection principles outlined in the GDPR, such as purpose limitation and data minimization.

A thorough analysis of automated decision-making follows, evaluating its admissibility, the necessary safeguards, and the potential right of data subjects to individual explanations. The study also scrutinizes the GDPR’s provision for a preventive, risk-based approach, emphasizing data protection by design and by default.

While the GDPR does not explicitly mention AI, the study highlights that many of its provisions are relevant to AI applications. This intersection reveals a tension between traditional data protection principles and the full deployment of AI’s capabilities, raising questions about how these principles should be interpreted, applied, and developed in light of AI’s evolving landscape.

Several AI-related data protection issues remain unaddressed in the GDPR, prompting the need for guidance for controllers and data subjects. The report advocates for a broader societal, political, and legal debate on the standards governing the processing of personal data using AI. This discussion should address the explanation, acceptability, fairness, and reasonableness of decisions about individuals, as well as determine which AI applications should be unconditionally barred and which may be admitted under specific circumstances and controls.

The study then examines key GDPR provisions relevant to AI:

 

  • Article 4(1): Personal Data (identification, identifiability, re-identification): Discussing the ‘re-personalisation’ of anonymous data and the inference of additional personal information, both facilitated by AI and big data (a brief illustrative sketch follows this list).
  • Article 4(4): Profiling: Addressing profiling carried out with AI technology, even though AI is not explicitly mentioned in the GDPR.
  • Article 4(11): Consent: Emphasizing the role of consent in the traditional understanding of data protection, particularly within the ‘notice and consent’ model.
  • Article 5(1)(b): Purpose Limitation: Examining the tension between AI and the purpose limitation requirement, and the legitimacy of reusing data for new purposes.
  • Article 5(1)(d): Accuracy: Discussing the requirement for accurate data, especially where data is produced as the output of an AI system.
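
The re-identification risk noted under Article 4(1) can be made concrete with a minimal sketch. The Python snippet below is a hypothetical illustration, not drawn from the study: it shows how a ‘de-identified’ dataset can be linked back to named individuals through quasi-identifiers (postcode, birth year, gender) found in an auxiliary public register; all names, fields, and values are invented.

```python
# Hypothetical illustration of a linkage ("re-personalisation") attack:
# a dataset stripped of names can still identify people when its
# quasi-identifiers match an auxiliary, publicly available register.
# All field names, values, and records below are invented.

deidentified_records = [
    {"postcode": "75001", "birth_year": 1984, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "10115", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

public_register = [
    {"name": "A. Martin", "postcode": "75001", "birth_year": 1984, "gender": "F"},
    {"name": "B. Schmidt", "postcode": "10115", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")


def link(records, register):
    """Re-attach identities by matching records on shared quasi-identifiers."""
    matches = []
    for record in records:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        for person in register:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append({"name": person["name"], "diagnosis": record["diagnosis"]})
    return matches


print(link(deidentified_records, public_register))
# [{'name': 'A. Martin', 'diagnosis': 'asthma'}, {'name': 'B. Schmidt', 'diagnosis': 'diabetes'}]
```

Because such linkage requires no sophisticated tooling, data that appears anonymous may become identifiable again and thus fall back within the GDPR’s definition of personal data.
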
Contrary to initial concerns that the GDPR might be incompatible with AI and big data, the report suggests that the Regulation is likely to be interpreted in a way that reconciles the protection of data subjects with the enablement of useful AI applications.

It underscores the importance of oversight by competent authorities, complemented by the support of civil society, given AI’s impact on power relations and societal arrangements.

Furthermore, the report advocates for a public debate and the involvement of representative institutions, pointing out that the GDPR currently lacks provisions for collective enforcement, even though collective actions for injunctions and compensation could be crucial for effective protection.

The Interplay between GDPR and AI

The report highlights a key limitation of the GDPR with regard to collective enforcement: protection largely rests on individual actions by data subjects. To address this, the study suggests that the oversight of competent authorities should be complemented by the active involvement of civil society. Given the profound impact of AI on power relations, collective interests, and societal arrangements, the report advocates for a public debate and the engagement of representative institutions.

AI’s Evolution and Societal Implications

Over the past decade, AI has undergone rapid development, showcasing its potential for economic, social, and cultural progress. However, with these opportunities come serious risks such as unemployment, inequality, discrimination, and surveillance. The integration of AI with big data has further amplified these risks, leading to concerns about pervasive surveillance, manipulation, and social fragmentation.
 

AI and Personal Data

The study addresses the intersection of AI and personal data, emphasizing the transformative nature of AI applications in analyzing, forecasting, and influencing human behavior. While AI can enable more precise and impartial decision-making, it also introduces the risk of discriminatory outcomes. The study underscores the societal significance of AI-based processing of personal data and its potential extremes in the form of ‘surveillance capitalism’ and a ‘surveillance state.’

Establishing a Normative Framework

To ensure the responsible development and deployment of AI, the report calls for a comprehensive socio-technical framework that incorporates ethical and legal principles, including autonomy, prevention of harm, fairness, and explicability. Additionally, sector-specific regulations, including data protection, consumer protection, and competition law, are deemed necessary to address the multifaceted legal issues arising from AI’s pervasive impact on European society.

GDPR Compatibility with AI

Although AI is not explicitly mentioned in the GDPR, the report outlines how many of its provisions are relevant to AI applications. It acknowledges the tension between traditional data protection principles and the extensive power of AI and big data, and it suggests interpretations and applications of GDPR principles that accommodate beneficial uses of AI while maintaining data protection standards.

Challenges and Opportunities

While acknowledging the GDPR’s potential to balance data protection and societal interests, the report identifies challenges arising from vague clauses and open-ended standards. The principles of risk prevention and accountability are seen as directing the processing of personal data toward a ‘positive sum’ game, but the burden of establishing optimal solutions is placed on controllers. The report therefore underscores the importance of clear guidance from data protection bodies to mitigate legal uncertainty and facilitate compliant solutions, particularly for smaller companies venturing into AI applications.

Policy Indications for GDPR and AI

  1. Alignment of GDPR with AI: The study acknowledges that the GDPR generally offers meaningful indications for data protection within AI applications. It emphasizes the interpretive flexibility of the GDPR, suggesting that it does not inherently hinder the application of AI to personal data or disadvantage EU companies against global competitors.
  2. Avoiding Major Changes: Contrary to the fear of extensive amendments, the study asserts that the GDPR does not require substantial changes to accommodate AI applications. However, it highlights certain AI-related data protection issues that lack explicit answers in the GDPR, leading to potential uncertainties and costs.
  3. Guidance for Controllers and Data Subjects: Recognizing the importance of guidance, the study recommends providing controllers and data subjects with clear instructions on applying AI to personal data in line with GDPR principles. This guidance aims to prevent costs associated with legal uncertainty while enhancing compliance.
  4. Multilevel Approach to Guidance: The study emphasizes the need for a multilevel approach involving data protection authorities, civil society, representative bodies, specialized agencies, and all stakeholders to provide comprehensive guidance on AI applications.
  5. Broad Societal Debate: To address uncertainties, the study calls for a broad societal debate involving political and administrative authorities, civil society, and academia. This debate should establish standards for the processing of personal data through AI, ensuring the acceptability, fairness, and reasonableness of decisions about individuals.
  6. Specific Guidance from Authorities: Political authorities, such as the European Parliament and the Council, are urged to provide general, open-ended indications about values and the ways to achieve them. Data protection authorities, including the European Data Protection Board, should offer specific guidance on AI issues where the GDPR lacks clarity.
  7. Interpretation of Fundamental Principles: The study recommends interpreting fundamental data protection principles, such as purpose limitation and minimization, in a way that does not hinder the use of personal data for machine learning purposes. This includes the creation of training sets and algorithmic models for socially beneficial AI systems.
  8. Profiling and Automated Decision-Making: The study suggests imposing an obligation of reasonableness on controllers, particularly when profiling leads to automated decisions. Controllers should also provide high-level explanations to users, enabling them to contest detrimental outcomes (a brief illustrative sketch follows this list).
  9. Notification Obligations and Information Rights: It may be useful to establish obligations for controllers to notify data protection authorities of individualized profiling and decision-making applications. The study also stresses the importance of specifying the content of controllers’ obligations to provide information about the ‘logic’ of AI systems.
  10. Empowering Data Subjects: Ensuring the right to opt out of profiling and data transfers, as well as the right to be forgotten, is crucial. Normative and technological requirements concerning AI by design and by default need to be specified to facilitate these rights.
  11. Combatting Data Abuse: Strong measures are recommended against companies and public authorities that intentionally abuse the trust of data subjects by using their data against their interests.
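
To make points 8 and 9 more concrete, here is a minimal, hypothetical Python sketch of the kind of high-level explanation a controller might attach to an automated decision. The scoring model, feature names, weights, and threshold are all invented for illustration; the study itself does not prescribe any particular technique.

```python
# Hypothetical sketch of a high-level explanation for an automated decision
# made by a simple linear scoring model. All features, weights, and the
# decision threshold are illustrative and not taken from the study.

FEATURE_WEIGHTS = {              # signed contribution of each input to the score
    "years_at_current_employer": 0.6,
    "existing_debt_ratio": -1.2,
    "missed_payments_last_year": -0.9,
    "income_to_loan_ratio": 1.1,
}
DECISION_THRESHOLD = 0.0         # score >= threshold -> application approved


def decide_and_explain(applicant: dict) -> dict:
    """Return the automated decision plus a short, human-readable rationale."""
    contributions = {
        feature: weight * applicant[feature]
        for feature, weight in FEATURE_WEIGHTS.items()
    }
    score = sum(contributions.values())
    approved = score >= DECISION_THRESHOLD
    # Surface only the most influential factors, not the full model internals.
    top_factors = sorted(contributions, key=lambda f: abs(contributions[f]), reverse=True)[:2]
    return {
        "approved": approved,
        "main_factors": top_factors,
        "explanation": (
            f"The application was {'approved' if approved else 'declined'} mainly because of: "
            + ", ".join(f.replace("_", " ") for f in top_factors)
            + ". You may contest this decision and request human review."
        ),
    }


print(decide_and_explain({
    "years_at_current_employer": 1,
    "existing_debt_ratio": 0.8,
    "missed_payments_last_year": 2,
    "income_to_loan_ratio": 0.5,
}))
```

A summary of this kind does not disclose the full model, but it gives the data subject meaningful information about the ‘logic’ behind the decision and a route to contest a detrimental outcome.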

Conclusion

In conclusion, the study underscores the importance of a responsible and risk-oriented approach by controllers engaging in AI-based processing within the framework of GDPR. While acknowledging the complexities and gaps in the GDPR, the study advocates for a collaborative effort involving institutions, data protection authorities, and stakeholders to ensure consistent application of data protection principles in the rapidly evolving landscape of AI. By fostering trust and preventing risks, such an approach aims to contribute to the success of AI applications in harmony with data protection standards.
 

Author: 

Kosha Doshi, Final Year Student at Symbiosis Law School, Pune & Legal Intern, Data Privacy and Digital Law, at EU Digital Partners.