ISO/IEC 42001:2023 as a Commitment to Trustworthy AI

I. ISO/IEC 42001:2023

On December 18, 2023, the International Organization for Standardization (ISO) published the ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management System standard.

ISO/IEC 42001:2023 was developed by the International Organization for Standardization / International Electrotechnical Commission (ISO/IEC) Joint Technical Committee (JTC) 1, Information technology, Subcommittee (SC) 42, Artificial intelligence.

The standard is a comprehensive, sector-agnostic management framework that provides guidance for organizations to systematically address and control risks related to the development and deployment of AI systems.

The risk management framework is built on a “Plan-Do-Check-Act” approach, encouraging organizations that develop and deploy AI systems to adopt policies and objectives, as well as processes to achieve those objectives, with the goal of reducing potential harm, as follows (a minimal sketch of the cycle follows the list):

  • Plan: Analysis and policy development
  • Do: Process Implementation
  • Check: Evaluation and audit
  • Act: Continual improvement 
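Purely as an illustration of how this cycle recurs in practice, the sketch below models the four stages as methods on a small object; the class name, stage actions, and example strings are assumptions for the sketch, not content from the standard:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Plan-Do-Check-Act cycle for an AI management
# system. Stage contents are hypothetical examples, not text from the standard.

@dataclass
class AIMSCycle:
    policies: list[str] = field(default_factory=list)   # Plan: policies and objectives
    findings: list[str] = field(default_factory=list)   # Check: audit findings

    def plan(self, policy: str) -> None:
        """Plan: analyse risks and adopt a policy or objective."""
        self.policies.append(policy)

    def do(self) -> None:
        """Do: implement the processes the policies require."""
        for policy in self.policies:
            print(f"Implementing processes for: {policy}")

    def check(self, finding: str) -> None:
        """Check: record evaluation and audit findings."""
        self.findings.append(finding)

    def act(self) -> None:
        """Act: turn findings into revised policies for the next cycle."""
        while self.findings:
            self.plan(f"Revised policy addressing: {self.findings.pop(0)}")

cycle = AIMSCycle()
cycle.plan("Document intended use of the AI system")
cycle.do()
cycle.check("Intended-use documentation incomplete for model v2")
cycle.act()  # feeds the finding back into the next Plan stage
```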

The body of ISO/IEC 42001:2023 is relatively generic, but its four annexes contain specific elements, as follows:

Annex A: Management guide for AI system development, including a list of controls such as the following (an illustrative gap-analysis sketch follows the list):

  • Policies related to AI
  • Internal organization (e.g., roles and responsibilities, reporting of concerns)
  • Resources for AI systems (e.g., data, tooling, human)
  • Impact analysis of AI systems on individuals, groups, & society
  • AI system life cycle
  • Data for AI systems
  • Information for interested parties of AI systems
  • Use of AI systems (e.g., responsible / intended use, objectives)
  • Third-party relationships (e.g., suppliers, customers)
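As a sketch only, the control themes above could be tracked in a simple gap-analysis structure when preparing for an audit; the status values and the selection of themes below are assumptions for the example, not wording from Annex A:

```python
# Hypothetical gap-analysis checklist for the Annex A control themes above.
# Status values are illustrative, not prescribed by ISO/IEC 42001.
annex_a_controls: dict[str, str] = {
    "Policies related to AI": "implemented",
    "Internal organization": "implemented",
    "Resources for AI systems": "in_progress",
    "Impact analysis on individuals, groups, and society": "not_started",
    "AI system life cycle": "in_progress",
    "Data for AI systems": "implemented",
    "Information for interested parties": "not_started",
    "Use of AI systems": "in_progress",
    "Third-party relationships": "not_started",
}

# List the themes that still need work before an audit.
gaps = [theme for theme, status in annex_a_controls.items() if status != "implemented"]
print("Open control themes:", ", ".join(gaps))
```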

Annex B: Implementation guidance for the AI controls listed in Annex A, including data management processes.

Annex C: AI-related organizational objectives and risk sources, such as:

  • Objectives: Fairness; Security; Safety; Privacy; Robustness; Transparency and explainability; Accountability; Availability; Maintainability; Availability and quality of training data; AI expertise
  • Risk sources: Level of automation; Lack of transparency and explainability; Complexity of IT environment; System life cycle issues; System hardware issues; Technology readiness; Risks related to ML

Annex D: Domain and sector-specific standards.  

II. AI System Impact Assessment

The integration of the AI System Impact Assessment into organizational practices aligns with the principles outlined in ISO/IEC 42001:2023. 
 
The areas of analysis in an AI System Impact Assessment include the following (a sketch of a structured assessment record follows the list):
 
  • Stakeholder Identification: Identifying and engaging with all stakeholders affected by AI systems, including individuals, communities, and relevant interest groups.
  • Impact Criteria Definition: Establishing clear criteria for assessing the impacts of AI systems, such as ethical considerations, privacy implications, and economic effects.
  • Risk Assessment: Conducting comprehensive risk assessments to identify potential negative consequences of AI deployment and use on individuals, groups, and societies.
  • Algorithmic Bias Evaluation: Evaluating AI algorithms for biases that may perpetuate discrimination or inequity among different demographic groups.
  • Data Privacy and Security: Assessing the privacy and security implications of AI systems, including data handling practices, encryption methods, and compliance with relevant regulations (e.g., GDPR).
  • Societal Well-being Analysis: Examining the broader societal implications of AI systems, including their effects on employment, social cohesion, and access to resources.
  • Ethical Considerations: Addressing ethical dilemmas associated with AI development and use, such as fairness, accountability, transparency, and human rights.
  • Regulatory Compliance: Ensuring compliance with relevant regulations and standards, including ISO 42001, to guide the responsible implementation of AI management systems.
  • Continuous Monitoring and Improvement: Implementing mechanisms for ongoing monitoring, evaluation, and improvement of AI systems’ impacts on individuals, groups, and societies.
  • Stakeholder Engagement: Facilitating open dialogue and collaboration with stakeholders throughout the AI system impact assessment process to ensure inclusivity, transparency, and trustworthiness.
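As a sketch of how these areas might be captured in practice, the record below maps each area of analysis onto a free-text field; the field names and the example findings are illustrative assumptions, not a schema defined by the standard:

```python
from dataclasses import dataclass

# Hypothetical record for one AI System Impact Assessment. The fields map
# onto the areas of analysis listed above; none of the names or example
# findings come from ISO/IEC 42001 itself.

@dataclass
class ImpactAssessment:
    stakeholders: str           # Stakeholder Identification / Engagement
    impact_criteria: str        # Impact Criteria Definition
    risk_assessment: str        # Risk Assessment
    bias_evaluation: str        # Algorithmic Bias Evaluation
    privacy_security: str       # Data Privacy and Security
    societal_wellbeing: str     # Societal Well-being Analysis
    ethics: str                 # Ethical Considerations
    regulatory_compliance: str  # Regulatory Compliance
    monitoring_plan: str        # Continuous Monitoring and Improvement

assessment = ImpactAssessment(
    stakeholders="Loan applicants, credit officers, consumer-rights groups",
    impact_criteria="Fairness across demographic groups; privacy; economic effects",
    risk_assessment="Risk of wrongful denial rated high; mitigation: human review",
    bias_evaluation="Disparate impact checked per protected attribute",
    privacy_security="GDPR lawful basis documented; data encrypted at rest",
    societal_wellbeing="Effect on access to credit in underserved regions reviewed",
    ethics="Transparency notice and appeal mechanism in place",
    regulatory_compliance="Mapped to ISO/IEC 42001 Annex A controls",
    monitoring_plan="Quarterly re-evaluation of model drift and complaints",
)
print(assessment.risk_assessment)
```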

III. ISO/IEC 42001:2023 and the AI Act

The AI Act sets out specific requirements for the implementation of quality management systems and risk management which may not be fully addressed in ISO/IEC 42001. Moreover, the definition of risk in ISO/IEC 42001, namely the combination of the probability of occurrence of harm and the severity of that harm, might not necessarily align with the AI Act.
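To make that definition concrete, one common (illustrative) way to operationalize it is a simple risk matrix in which a score combines ordinal probability and severity ratings; the 1-5 scales and the treatment threshold below are assumptions for this sketch, not values taken from the standard or the AI Act:

```python
# Illustrative scoring under the ISO/IEC 42001-style risk definition:
# risk = combination of probability of harm and severity of harm.
# The 1-5 ordinal scales and the threshold are assumptions for this sketch.

def risk_score(probability: int, severity: int) -> int:
    """Combine ordinal ratings (1 = lowest, 5 = highest) into a risk score."""
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    return probability * severity

score = risk_score(probability=2, severity=5)  # unlikely but severe harm
print(f"risk score: {score}, needs treatment: {score >= 10}")  # hypothetical threshold
```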
 
Nevertheless, one of the most obvious incentives for adopting ISO/IEC 42001 is the presumption of conformity under Article 40 of the current draft EU AI Act. Article 40 states that high-risk AI systems which are in conformity with relevant harmonised standards shall be presumed to be in conformity with the requirements for those systems set out in Title III, Chapter 2.

Author: Petruta Pirvan, Founder and Legal Counsel Data Privacy and Digital Law at EU Digital Partners