
ISO/IEC 42001 — AI Management System

The world's first international standard for AI Management Systems (AIMS). Govern AI risk, bias, transparency, and human oversight across your AI-powered products and services — with a clear path to EU AI Act compliance. Available on the Premium plan.

The first international standard
for AI governance

ISO/IEC 42001:2023 is the world's first international standard for AI Management Systems. Published in December 2023, it provides organisations with a systematic framework to develop, deploy, monitor, and continuously improve AI systems responsibly — addressing AI-specific risks including bias, explainability, data quality, and human oversight.

Aligned with familiar management system structures (ISO 27001, ISO 9001), ISO 42001 follows the same Plan-Do-Check-Act cycle — making it straightforward for organisations already certified under ISO 27001 to extend their ISMS to cover AI governance.

ISO 42001 is increasingly cited as a pathway to EU AI Act compliance, particularly for organisations deploying high-risk AI systems that require documented risk management, transparency, and human oversight mechanisms.

2023 – Published
10 – AIMS clauses
9 – Annex A control areas
AIMS – Management system type

Where does your AI fall
on the EU AI Act risk scale?

The EU AI Act classifies AI systems into four risk tiers. ISO 42001 provides the governance framework to demonstrate compliance — especially for Limited and High Risk systems.

Unacceptable Risk

AI systems banned under the EU AI Act — social scoring by governments, real-time biometric surveillance in public spaces, and exploitation of vulnerabilities.

Examples: Social credit systems, subliminal manipulation AI
High Risk

AI in critical infrastructure, hiring, credit scoring, education, justice, and healthcare. Subject to mandatory conformity assessment, registration, and human oversight.

Examples: CV screening, credit risk AI, medical diagnostics
Limited Risk

AI systems with specific transparency obligations — chatbots must disclose they are AI, deepfakes must be labelled. ISO 42001 helps document compliance.

Examples: Customer service chatbots, AI-generated content
Minimal Risk

Most AI applications — spam filters, recommendation systems, AI in games. No mandatory requirements under the EU AI Act, but ISO 42001 provides governance best practices.

Examples: Spam filters, AI-assisted search, recommender systems
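The four tiers above can be sketched as a simple lookup. This is an illustrative aid, not an official taxonomy: the keyword lists are assumptions drawn from the examples in each tier, and a real classification requires legal assessment of the system's intended purpose.

```python
# Illustrative sketch of the four EU AI Act risk tiers described above.
# Keyword lists are assumptions based on the examples given, not an
# official classification method.
RISK_TIERS = {
    "unacceptable": ["social scoring", "subliminal manipulation"],
    "high": ["cv screening", "credit scoring", "medical diagnostics"],
    "limited": ["chatbot", "deepfake", "ai-generated content"],
}

def classify_risk(use_case: str) -> str:
    """Return the (assumed) risk tier for a named use case; default is minimal."""
    needle = use_case.lower()
    for tier, keywords in RISK_TIERS.items():
        if any(k in needle for k in keywords):
            return tier
    return "minimal"
```

A real AIMS inventory would attach this tier to each AI system record so that High Risk systems get prioritised for impact assessment.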

ISO 42001 clauses — what your AIMS must cover

ISO 42001 follows the ISO High Level Structure (Annex SL) — the same framework used by ISO 27001. If you already have an ISMS, extending it to cover AI governance is straightforward.

Clause 4

Context of the Organisation

Understand internal/external context for AI deployment, identify stakeholders, and define the AIMS scope covering all AI systems in use.

Clause 5

Leadership

Top management commitment to responsible AI, AI policy, roles and responsibilities for AI governance, and accountability structures.

Clause 6

Planning

AI risk assessment methodology, AI-specific risk treatment, and setting objectives for responsible AI management across the organisation.

Clause 7

Support

Resources for AIMS, AI competence requirements, awareness training, internal/external communication, and documented information requirements.

Clause 8

Operation

AI risk assessment and treatment in practice, controls from Annex A, AI system impact assessments, and operational AI governance processes.

Clause 9

Performance Evaluation

Monitoring and measurement of AI management effectiveness, internal audits of the AIMS, and management reviews of AI governance.

Clause 10

Improvement

Continual improvement of the AIMS, nonconformity management, corrective actions for AI incidents and governance failures.
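The seven clauses above follow the Plan-Do-Check-Act cycle mentioned earlier. A minimal sketch of one common reading of that mapping (the exact grouping of clauses into phases varies between practitioners, so treat this as an assumption):

```python
# One common mapping of ISO 42001 clauses 4-10 onto Plan-Do-Check-Act.
# The grouping is a conventional reading, stated here as an assumption.
PDCA_TO_CLAUSES = {
    "Plan":  [4, 5, 6],   # Context, Leadership, Planning
    "Do":    [7, 8],      # Support, Operation
    "Check": [9],         # Performance Evaluation
    "Act":   [10],        # Improvement
}

def phase_for_clause(clause: int) -> str:
    """Return the PDCA phase an AIMS clause belongs to."""
    for phase, clauses in PDCA_TO_CLAUSES.items():
        if clause in clauses:
            return phase
    raise ValueError(f"Clause {clause} is outside AIMS clauses 4-10")
```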

ISO 42001 Annex A — AI-specific control areas

Annex A provides 9 control areas specific to AI governance — covering AI policy, data quality, impact assessment, transparency, human oversight, and supply chain risk.

A.2

Policies for AI in Organisations

Establish AI-specific policies covering responsible AI use, ethical principles, and governance commitments.

A.3

Internal Organisation

Define AI roles, responsibilities, and governance structures — including AI owner, AI operator, and AI subject roles.

A.4

Resources for AI Systems

Ensure adequate resources — data, compute, expertise — for responsible development and deployment of AI systems.

A.5

Assessing Impacts of AI Systems

Conduct AI impact assessments (including bias, safety, privacy, and societal impacts) before and during deployment.

A.6

AI System Life Cycle

Security and governance controls across the full AI system lifecycle: design, development, testing, deployment, monitoring, and decommissioning.

A.7

Data for AI Systems

Data quality, provenance, bias assessment, data minimisation, and privacy controls specifically for AI training and inference data.

A.8

Information for Interested Parties

Transparency obligations — documenting AI system capabilities, limitations, and intended use for users and affected parties.

A.9

Use of AI Systems

Human oversight requirements, user responsibilities, acceptable use policies, and controls preventing misuse of AI systems.

A.10

Third-Party and Customer Relationships

Due diligence for AI systems procured from third parties, supply chain risk, and contractual obligations for responsible AI.
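A gap assessment typically tracks each of the nine control areas above against its implementation status. A minimal sketch of such a register — the `ControlArea` type and the status values are illustrative assumptions, not part of the standard:

```python
from dataclasses import dataclass

# Illustrative gap register for the nine Annex A control areas listed
# above. The status vocabulary is an assumption for illustration.
@dataclass
class ControlArea:
    area_id: str
    title: str
    status: str = "not_assessed"  # not_assessed | gap | implemented

ANNEX_A = [
    ControlArea("A.2", "Policies for AI in Organisations"),
    ControlArea("A.3", "Internal Organisation"),
    ControlArea("A.4", "Resources for AI Systems"),
    ControlArea("A.5", "Assessing Impacts of AI Systems"),
    ControlArea("A.6", "AI System Life Cycle"),
    ControlArea("A.7", "Data for AI Systems"),
    ControlArea("A.8", "Information for Interested Parties"),
    ControlArea("A.9", "Use of AI Systems"),
    ControlArea("A.10", "Third-Party and Customer Relationships"),
]

def open_gaps(areas: list[ControlArea]) -> list[str]:
    """IDs of control areas not yet fully implemented."""
    return [a.area_id for a in areas if a.status != "implemented"]
```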

ISO 42001 certification
in 5 phases

For organisations already ISO 27001 certified, ISO 42001 typically adds 4–6 months. For new entrants, plan for 6–12 months end-to-end.

1

Gap Assessment

2–4 weeks

Inventory all AI systems in use, map against ISO 42001 requirements, identify governance gaps.

2

AI Policy & Governance

4–6 weeks

Establish AI policy, assign AIMS roles, define AI risk assessment methodology.

3

Risk Assessment

4–8 weeks

Conduct AI impact assessments for each system, prioritise high-risk AI applications, design treatment plan.

4

Implement Controls

8–16 weeks

Implement Annex A controls — data governance, transparency documentation, human oversight mechanisms.

5

Audit & Certification

4–8 weeks

Internal audit of AIMS effectiveness, management review, third-party certification audit.
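Summing the phase durations above gives a rough end-to-end range. A quick sketch (assuming the phases run sequentially, which in practice they often overlap):

```python
# The five certification phases above with their stated duration ranges
# (weeks). Sequential execution is assumed; phases often overlap.
PHASES = [
    ("Gap Assessment", 2, 4),
    ("AI Policy & Governance", 4, 6),
    ("Risk Assessment", 4, 8),
    ("Implement Controls", 8, 16),
    ("Audit & Certification", 4, 8),
]

def total_weeks(phases):
    """Return the (minimum, maximum) total duration in weeks."""
    return (sum(lo for _, lo, _ in phases),
            sum(hi for _, _, hi in phases))
```

The totals — roughly 22 to 42 weeks — line up with the 6–12 month end-to-end estimate for new entrants.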

Who needs ISO 42001?

Any organisation that develops, deploys, or procures AI systems — technology companies, financial services, healthcare providers, public sector agencies, and any enterprise using AI in critical business decisions. Particularly relevant for organisations subject to the EU AI Act.

AI/ML Product Companies · Financial Services · Healthcare · Public Sector · Legal · Retail & E-commerce · Enterprise AI Users

Multi-Framework Support

11 Compliance Frameworks Supported

From the minimum viable security baseline to enterprise-grade standards — coverage for every compliance requirement.


Start governing AI with ISO/IEC 42001 in Unicis

Implement responsible AI management controls, conduct AI impact assessments, and demonstrate EU AI Act readiness with automated gap analysis and cross-framework mapping. Available on the Premium plan.