I. ISO/IEC 42001:2023
ISO/IEC 42001:2023, published by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), breaks open the black box in ensuring trustworthy AI.
The ISO/IEC 42001:2023 AI Risk Management Framework was developed as a standard system for AI risk management by a specialised ISO/IEC subcommittee (SC 42) within Joint Technical Committee 1 (JTC 1).
The framework provides sector-agnostic guidance for organizations to manage and control risks in AI system development and deployment. Furthermore, the framework follows a “Plan-Do-Check-Act” approach, encouraging organizations to adopt policies and objectives that minimize potential harms.
In summary, the Plan-Do-Check-Act cycle is described below:
Plan-Do-Check-Act
- Plan: Analysis and policy development
- Do: Process Implementation
- Check: Evaluation and audit
- Act: Continual improvement
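Purely as an illustration (nothing below is defined by the standard itself), the cycle can be thought of as a loop that an AI management system iterates over. The phase descriptions mirror the list above; the PDCAPhase enum and next_phase helper are hypothetical names for this sketch.

```python
from enum import Enum

class PDCAPhase(Enum):
    """The four phases of the Plan-Do-Check-Act cycle."""
    PLAN = "Analysis and policy development"
    DO = "Process implementation"
    CHECK = "Evaluation and audit"
    ACT = "Continual improvement"

def next_phase(phase: PDCAPhase) -> PDCAPhase:
    """Advance to the next phase; ACT wraps back to PLAN (continual improvement)."""
    order = list(PDCAPhase)
    return order[(order.index(phase) + 1) % len(order)]

# Walk one full iteration of the cycle, starting from PLAN.
phase = PDCAPhase.PLAN
for _ in range(len(PDCAPhase)):
    print(f"{phase.name}: {phase.value}")
    phase = next_phase(phase)
```

The wrap-around from ACT back to PLAN captures the point of the approach: improvement is continual, not a one-off exercise.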
Although the body of ISO/IEC 42001:2023 is relatively generic, its four annexes contain specific elements. Here is a breakdown per annex.
Annexes
- Annex A: Management guide for AI system development, including a list of controls, such as:
- Policies related to AI.
- Internal organization (e.g., roles and responsibilities, reporting of concerns)
- Resources for AI systems (e.g., data, tooling, human resources)
- Impact analysis of AI systems on individuals, groups, and society
- AI system life cycle
- Data for AI systems
- Information for interested parties of AI systems.
- Use of AI systems (e.g., responsible / intended use, objectives)
- Third-party relationships (e.g., suppliers, customers)
- Annex B: Implementation guidance for the AI controls listed in Annex A, including data management processes.
- Annex C: AI-related organizational objectives and risk sources (a small illustrative mapping follows this list), such as:
- Objectives: Fairness; Security; Safety; Privacy; Robustness; Transparency and explainability; Accountability; Availability; Maintainability; Availability and quality of training data; AI expertise
- Risk sources: Level of automation; Lack of transparency and explainability; Complexity of IT environment; System life cycle issues; System hardware issues; Technology readiness; Risks related to ML.
- Annex D: Domain and sector-specific standards.
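Annex C's objectives and risk sources lend themselves to a simple risk-register structure. The sketch below uses the objective and risk-source names from the list above, but the pairings themselves are illustrative assumptions, not a mapping defined in ISO/IEC 42001:2023.

```python
# A minimal risk-register sketch pairing Annex C objectives with risk
# sources that could threaten them. The pairings are assumptions made
# for illustration, not prescribed by the standard.
risk_register: dict[str, list[str]] = {
    "Fairness": ["Risks related to ML", "Lack of transparency and explainability"],
    "Security": ["Complexity of IT environment", "System hardware issues"],
    "Safety": ["Level of automation", "Technology readiness"],
    "Transparency and explainability": ["Lack of transparency and explainability"],
    "Maintainability": ["System life cycle issues"],
}

for objective, sources in risk_register.items():
    print(f"Objective: {objective}")
    for source in sources:
        print(f"  - risk source: {source}")
```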
II. AI System Impact Assessment
Integrating AI System Impact Assessments into organizational practices aligns with the principles outlined in ISO/IEC 42001:2023.
The areas of analysis in an AI System Impact Assessment should include the following:
- Stakeholder Identification: Identifying and engaging with all stakeholders affected by AI systems, including individuals, communities, and relevant interest groups.
- Impact Criteria Definition: Establishing clear criteria for assessing the impacts of AI systems, such as ethical considerations, privacy implications, and economic effects.
- Risk Assessment: Conducting comprehensive risk assessments to identify potential negative consequences of AI deployment and use on individuals, groups, and societies.
- Algorithmic Bias Evaluation: Evaluating AI algorithms for biases that may perpetuate discrimination or inequity among different demographic groups (a minimal metric sketch follows this list).
- Data Privacy and Security: Assessing the privacy and security implications of AI systems, including data handling practices, encryption methods, and compliance with relevant regulations (e.g., GDPR).
- Societal Well-being Analysis: Examining the broader societal implications of AI systems, including their effects on employment, social cohesion, and access to resources.
- Ethical Considerations: Addressing ethical dilemmas associated with AI development and use, such as fairness, accountability, transparency, and human rights.
- Regulatory Compliance: Ensuring compliance with relevant regulations and standards, including ISO 42001, to guide the responsible implementation of AI management systems.
- Continuous Monitoring and Improvement: Implementing mechanisms for ongoing monitoring, evaluation, and improvement of AI systems’ impacts on individuals, groups, and societies.
- Stakeholder Engagement: Facilitating open dialogue and collaboration with stakeholders throughout the AI system impact assessment process to ensure inclusivity, transparency, and trustworthiness.
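To make the Algorithmic Bias Evaluation item concrete, the sketch below checks demographic parity, one common fairness metric among many. The data and the 0.8 threshold (a convention sometimes called the four-fifths rule) are assumptions for illustration, not requirements of ISO/IEC 42001 or any regulation.

```python
from collections import defaultdict

def demographic_parity_ratio(predictions: list[int], groups: list[str]) -> float:
    """Ratio of the lowest to highest positive-prediction rate across groups.

    A value near 1.0 suggests parity; values below ~0.8 are often treated
    as a flag for further review (the 'four-fifths rule' convention).
    """
    positives: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

# Hypothetical model outputs (1 = positive decision) for two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = demographic_parity_ratio(preds, groups)
print(f"Demographic parity ratio: {ratio:.2f}")  # 0.33 here: group B would be flagged
```

A single metric like this is only one input to a bias evaluation; a full assessment would look at several metrics and at the context in which the system is used.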
III. ISO/IEC 42001:2023 and the AI Act
The AI Act establishes specific requirements for implementing Quality Management Systems. Additionally, the Act defines requirements for an AI risk management framework, which may not be comprehensively covered by ISO/IEC 42001.
Moreover, the definition of risk set out in the ISO/IEC 42001 standard, specifically the wording “combination of the probability of occurrence of harm and the severity of that harm”, might not necessarily align with the AI Act.
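One common way to operationalise a “probability of harm combined with severity of harm” definition is a multiplicative risk score. This is a conventional interpretation only: neither ISO/IEC 42001 nor the AI Act mandates this formula, and the 1-5 ordinal scales below are assumptions for illustration.

```python
def risk_score(probability: int, severity: int) -> int:
    """Combine probability and severity of harm on assumed 1-5 ordinal scales.

    Multiplication is one conventional way to 'combine' the two factors;
    neither ISO/IEC 42001 nor the AI Act prescribes this exact formula.
    """
    if not (1 <= probability <= 5 and 1 <= severity <= 5):
        raise ValueError("probability and severity must each be in 1..5")
    return probability * severity

# Example: a likely harm (4/5) with moderate severity (3/5) scores 12 of 25,
# which a hypothetical internal policy might classify as requiring mitigation.
print(risk_score(probability=4, severity=3))  # 12
```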
Nevertheless, one of the most obvious incentives for adopting the ISO/IEC 42001 standard is the presumption of conformity under Article 40 of the EU AI Act.
Article 40 of the EU AI Act states that:
High-risk AI systems or general-purpose AI models which are in conformity with relevant harmonised standards or parts thereof shall be presumed to be in conformity with the relevant requirements for those systems as they are articulated in the AI Act.
In conclusion, the ISO/IEC 42001:2023 standard, developed by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), aims to ensure trustworthy AI by offering a comprehensive AI Risk Management Framework.
Altogether, this sector-agnostic framework provides guidance for organizations to systematically manage and control risks in AI system development and deployment.
It follows a “Plan-Do-Check-Act” approach, encouraging organizations to adopt policies and objectives to minimize potential harms.
By breaking open the black box, ISO/IEC 42001:2023 promotes transparency and reliability in AI.