See the mapping of AIUC-1 requirements to ISO 42001 controls here.
Data & Privacy
A001: Establish input data policy
Establish and communicate AI input data policies covering how customer data is used for model training, inference processing, data retention periods, and customer data rights
A.7.2: Data for development and enhancement of AI system — The organization shall define, document and implement data management processes related to the development of AI systems.
A.7.3: Acquisition of data — The organization shall determine and document details about the acquisition and selection of the data used in AI systems.
Data & Privacy
A002: Establish output data policy
Establish AI output ownership, usage, opt-out, and deletion policies and communicate these policies to customers
—
Data & Privacy
A003: Limit AI agent data collection
Implement safeguards to limit AI agent data access to task-relevant information based on user roles and context
—
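As an illustration of the kind of safeguard A003 describes, the sketch below filters a customer record down to the fields a given agent role is permitted to see before the data reaches the AI agent. All role and field names are hypothetical, not taken from AIUC-1.

```python
# Hypothetical sketch: restrict the data an AI agent may read based on the
# caller's role. Roles and field names below are illustrative assumptions.

ROLE_FIELDS = {
    "support_agent": {"name", "ticket_history"},
    "billing_agent": {"name", "payment_status"},
}

def filter_record(record: dict, role: str) -> dict:
    """Return only the fields the given role is allowed to see."""
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in record.items() if k in allowed}
```

A real deployment would derive the allowlist from the identity provider and apply it at the retrieval layer, so out-of-scope fields never enter the model context at all.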
Data & Privacy
A004: Protect IP & trade secrets
Implement safeguards or technical controls to prevent AI systems from leaking company intellectual property or confidential information
—
Data & Privacy
A005: Prevent cross-customer data exposure
Implement safeguards to prevent cross-customer data exposure when combining customer data from multiple sources
—
Data & Privacy
A006: Prevent PII leakage
Establish safeguards to prevent personal data leakage through AI outputs and logs
—
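One common technical control for A006 is pattern-based redaction applied to outputs and log lines before they are stored. The sketch below covers only two illustrative patterns (email addresses and US-style SSNs); it is not a complete PII taxonomy and assumes a regex pass is acceptable as a first filter.

```python
import re

# Hypothetical sketch: redact common PII patterns from model outputs and logs.
# The pattern list is illustrative, not exhaustive.
PII_PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str) -> str:
    """Replace each matched PII pattern with its placeholder."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```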
Data & Privacy
A007: Prevent IP violations
Implement safeguards and technical controls to prevent AI outputs from violating copyrights, trademarks, or other third-party intellectual property rights
A.7.5: Data provenance — The organization shall define and document a process for recording the provenance of data used in its AI systems over the life cycles of the data and the AI system.
Security
B001: Third-party testing of adversarial robustness
Implement an adversarial testing program to validate system resilience against adversarial inputs and prompt injection attempts, in line with the adversarial threat taxonomy
—
Security
B002: Detect adversarial input
Implement monitoring capabilities to detect and respond to adversarial inputs and prompt injection attempts
—
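A monitoring capability like B002 often starts with a cheap heuristic screen run before input reaches the model, with flagged inputs routed to a trained classifier or human review. The phrase list below is an illustrative assumption, not a vetted threat taxonomy.

```python
# Hypothetical sketch: lightweight heuristic screen for prompt-injection
# phrasing. Real deployments would pair this with a trained classifier;
# the marker phrases here are illustrative.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now in developer mode",
]

def looks_adversarial(user_input: str) -> bool:
    """Flag inputs containing known injection phrasing (case-insensitive)."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)
```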
Security
B003: Manage public release of technical details
Implement controls to prevent over-disclosure of technical information about AI systems and organizational details that could enable adversarial targeting
—
Security
B004: Prevent AI endpoint scraping
Implement safeguards to prevent probing or scraping of external AI endpoints
—
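A typical safeguard for B004 is per-key rate limiting on the public endpoint, which slows automated probing and scraping. The sketch below uses a sliding window; the window size and request limit are illustrative assumptions.

```python
import time
from collections import deque

# Hypothetical sketch: sliding-window rate limiter keyed by API key.
class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = {}  # api_key -> deque of request timestamps

    def allow(self, api_key, now=None) -> bool:
        """Record a request and return False once the window limit is hit."""
        now = time.monotonic() if now is None else now
        q = self.history.setdefault(api_key, deque())
        while q and now - q[0] >= self.window:
            q.popleft()  # drop requests that fell outside the window
        if len(q) >= self.max_requests:
            return False
        q.append(now)
        return True
```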
Security
B005: Implement real-time input filtering
Implement real-time input filtering using automated moderation tools
—
Security
B006: Prevent unauthorized AI agent actions
Implement safeguards to prevent AI agents from performing actions beyond intended scope and authorized privileges
—
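For B006, one concrete control is gating every agent tool call against a per-role allowlist, so the agent cannot invoke tools beyond its authorized scope even if prompted to. Tool and role names below are hypothetical.

```python
# Hypothetical sketch: authorize agent tool calls against a per-role allowlist.
TOOL_ALLOWLIST = {
    "support_agent": {"lookup_order", "send_reply"},
    "readonly_agent": {"lookup_order"},
}

def authorize_tool_call(role: str, tool_name: str) -> bool:
    """True only if the role's allowlist includes the tool."""
    return tool_name in TOOL_ALLOWLIST.get(role, set())

def dispatch(role: str, tool_name: str, tools: dict):
    """Run a tool only after the authorization check passes."""
    if not authorize_tool_call(role, tool_name):
        raise PermissionError(f"{role!r} may not call {tool_name!r}")
    return tools[tool_name]()
```

Enforcing the check in the dispatcher, rather than in the prompt, means a successful injection still cannot widen the agent's privileges.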
Security
B007: Enforce user access privileges to AI systems
Establish and maintain user access controls and admin privileges for AI systems in line with policy
—
Security
B008: Protect model deployment environment
Implement security measures for AI model deployment environments including encryption, access controls and authorization
—
Security
B009: Limit output over-exposure
Implement output limitations and obfuscation techniques to safeguard against information leakage
—
Safety
C001: Define AI risk taxonomy
Establish a risk taxonomy that categorizes harmful, out-of-scope, and hallucinated outputs, risky tool calls, and other risks based on application-specific usage
4.1: Understanding the organization and its context — The organization shall determine external and internal issues relevant to the AI management system’s purpose and ability to achieve intended results.
6.1.1: Actions to address risks and opportunities — General — The organization shall plan actions to address risks and opportunities, integrate them into processes, and evaluate their effectiveness.
6.1.2: AI risk assessment — The organization shall establish and maintain a process for AI risk assessment, including identification, analysis, and evaluation of risks.
6.1.3: AI risk treatment — The organization shall establish and maintain a process for AI risk treatment, including selecting and implementing necessary controls.
6.1.4: AI system impact assessment — The organization shall conduct AI system impact assessments covering potential effects on individuals, groups, and society.
8.2: AI risk assessment — The organization shall perform AI risk assessments at planned intervals and when significant changes occur.
8.3: AI risk treatment — The organization shall implement AI risk treatment plans and review them when assessments identify new or ineffective controls.
8.4: AI system impact assessment — The organization shall perform AI system impact assessments at planned intervals and when significant changes are proposed.
A.5.2: AI system impact assessment process — The organization shall establish a process to assess the potential consequences for individuals or groups of individuals, or both, and societies that can result from the AI system throughout its life cycle.
A.5.3: Documentation of AI system impact assessments — The organization shall document the results of AI system impact assessments and retain results for a defined period.
A.5.4: Assessing AI system impact on individuals or groups of individuals — The organization shall assess and document the potential impacts of AI systems to individuals or groups of individuals throughout the system's life cycle.
A.5.5: Assessing societal impacts of AI systems — The organization shall assess and document the potential societal impacts of their AI systems throughout their life cycle.
Safety
C002: Conduct pre-deployment testing
Conduct internal testing of AI systems across risk categories prior to deployment, for system changes requiring formal review or approval
A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
A.6.2.5: AI system deployment — The organization shall document a deployment plan and ensure that appropriate requirements are met prior to deployment.
Safety
C003: Prevent harmful outputs
Implement safeguards or technical controls to prevent harmful outputs including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception
—
Safety
C004: Prevent out-of-scope outputs
Implement safeguards or technical controls to prevent out-of-scope outputs (e.g. political discussion, healthcare advice)
—
Safety
C005: Prevent customer-defined high risk outputs
Implement safeguards or technical controls to prevent additional high risk outputs as defined in risk taxonomy
—
Safety
C006: Prevent output vulnerabilities
Implement safeguards to prevent security vulnerabilities in outputs from impacting users
—
Safety
C007: Flag high risk outputs
Implement an alerting system that flags high-risk outputs for human review
A.6.1.2: Objectives for responsible development of AI system — The organization shall identify and document objectives to guide the responsible development of AI systems, and take those objectives into account and integrate measures to achieve them in the development life cycle.
A.9.2: Processes for responsible use of AI systems — The organization shall define and document the processes for the responsible use of AI systems.
A.9.3: Objectives for responsible use of AI system — The organization shall identify and document objectives to guide the responsible use of AI systems.
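An alerting system of the kind C007 requires typically scores each output and routes anything above a threshold to a human-review queue. In the sketch below the scoring function is a toy stand-in for a real risk classifier; the terms and threshold are illustrative assumptions.

```python
# Hypothetical sketch: hold outputs whose risk score crosses a threshold
# for human review instead of releasing them.
REVIEW_THRESHOLD = 0.8

def score_output(text: str) -> float:
    """Toy stand-in for a trained risk classifier."""
    risky_terms = {"guarantee", "legal advice", "diagnosis"}
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / 2)

def triage(text: str, review_queue: list) -> str:
    """Release low-risk outputs; queue high-risk ones for human review."""
    if score_output(text) >= REVIEW_THRESHOLD:
        review_queue.append(text)  # flagged for human review
        return "held_for_review"
    return "released"
```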
Safety
C008: Monitor AI risk categories
Implement monitoring of AI systems across risk categories
6.1.1: Actions to address risks and opportunities — General — The organization shall plan actions to address risks and opportunities, integrate them into processes, and evaluate their effectiveness.
6.1.2: AI risk assessment — The organization shall establish and maintain a process for AI risk assessment, including identification, analysis, and evaluation of risks.
6.1.3: AI risk treatment — The organization shall establish and maintain a process for AI risk treatment, including selecting and implementing necessary controls.
8.2: AI risk assessment — The organization shall perform AI risk assessments at planned intervals and when significant changes occur.
8.3: AI risk treatment — The organization shall implement AI risk treatment plans and review them when assessments identify new or ineffective controls.
9.1: Monitoring, measurement, analysis and evaluation — The organization shall determine monitoring, measurement, analysis, and evaluation needed to ensure conformity and effectiveness.
A.5.4: Assessing AI system impact on individuals or groups of individuals — The organization shall assess and document the potential impacts of AI systems to individuals or groups of individuals throughout the system's life cycle.
A.6.2.6: AI system operation and monitoring — The organization shall define and document the necessary elements for the ongoing operation of the AI system. At a minimum, this should include system and performance monitoring, repairs, updates and support.
A.9.2: Processes for responsible use of AI systems — The organization shall define and document the processes for the responsible use of AI systems.
A.9.4: Intended use of the AI system — The organization shall ensure that the AI system is used according to the intended uses of the AI system and its accompanying documentation.
Safety
C009: Enable real-time feedback and intervention
Implement mechanisms for real-time user feedback collection and intervention
A.8.3: External reporting — The organization shall provide capabilities for interested parties to report adverse impacts of the AI system.
Safety
C010: Third-party testing for harmful outputs
Appoint expert third parties to evaluate system robustness to harmful outputs including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception at least every 3 months
A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
Safety
C011: Third-party testing for out-of-scope outputs
Appoint expert third parties to evaluate system robustness to out-of-scope outputs (e.g. political discussion, healthcare advice) at least every 3 months
A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
Safety
C012: Third-party testing for customer-defined risk
Appoint expert third parties to evaluate system robustness to additional high-risk outputs, as defined in the risk taxonomy, at least every 3 months
A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
Reliability
D001: Prevent hallucinated outputs
Implement safeguards or technical controls to prevent hallucinated outputs
—
Reliability
D002: Third-party testing for hallucinations
Appoint expert third parties to evaluate hallucinated outputs at least every 3 months
A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
Reliability
D003: Restrict unsafe tool calls
Implement safeguards or technical controls to prevent tool calls in AI systems from executing unauthorized actions, accessing restricted information, or making decisions beyond their intended scope
—
Reliability
D004: Third-party testing of tool calls
Appoint expert third parties to evaluate tool calls in AI systems, including executing unauthorized actions, accessing restricted information, or making decisions beyond their intended scope, at least every 3 months
A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
Accountability
E001: AI failure plan for security breaches
Document an AI failure plan for AI privacy and security breaches, assigning accountable owners and establishing notification and remediation processes with third-party support as needed (e.g. legal, PR, insurers)
A.8.4: Communication of incidents — The organization shall determine and document a plan for communicating incidents to users of the AI system.
A.8.5: Information for interested parties — The organization shall determine and document their obligations to report information about the AI system to interested parties.
Accountability
E002: AI failure plan for harmful outputs
Document an AI failure plan for harmful AI outputs that cause significant customer harm, assigning accountable owners and establishing remediation with third-party support as needed (e.g. legal, PR, insurers)
A.8.4: Communication of incidents — The organization shall determine and document a plan for communicating incidents to users of the AI system.
Accountability
E003: AI failure plan for hallucinations
Document an AI failure plan for hallucinated AI outputs that cause substantial customer financial loss, assigning accountable owners and establishing remediation with third-party support as needed (e.g. legal, PR, insurers)
A.8.4: Communication of incidents — The organization shall determine and document a plan for communicating incidents to users of the AI system.
Accountability
E004: Assign accountability
Document which AI system changes across the development & deployment lifecycle require formal review or approval, assign a lead accountable for each, and document their approval with supporting evidence
5.1: Leadership and commitment — Top management shall demonstrate leadership and commitment to the AI management system and its effectiveness.
5.3: Roles, responsibilities and authorities — Top management shall assign roles, responsibilities, and authorities for the AI management system.
7.2: Competence — The organization shall ensure competence of persons working under its control based on education, training, or experience.
A.10.2: Allocating responsibilities — The organization shall ensure that responsibilities within their AI system life cycle are allocated between the organization, its partners, suppliers, customers and third parties.
A.3.2: AI roles and responsibilities — Roles and responsibilities for AI shall be defined and allocated according to the needs of the organization.
A.4.6: Human resources — As part of resource identification, the organization shall document information about the human resources and their competences utilized for the development, deployment, operation, change management, maintenance, transfer and decommissioning, as well as verification and integration of the AI system.
A.6.2.2: AI system requirements and specification — The organization shall specify and document requirements for new AI systems or material enhancements to existing systems.
Accountability
E005: Assess cloud vs on-prem processing
Establish criteria for selecting a cloud provider, and the circumstances warranting on-premises processing, considering data sensitivity, regulatory requirements, security controls, and operational needs
—
Accountability
E006: Conduct vendor due diligence
Establish AI vendor due diligence processes for foundation and upstream model providers covering data handling, PII controls, security and compliance
A.10.3: Suppliers — The organization shall establish a process to ensure that its usage of services, products or materials provided by suppliers aligns with the organization's approach to the responsible development and use of AI systems.
Accountability
E007: Document system change approvals
Merged with E004 - see changelog (Q1 2026 update)
6.3: Planning of changes — The organization shall plan and control changes to the AI management system in a planned manner.
A.6.2.2: AI system requirements and specification — The organization shall specify and document requirements for new AI systems or material enhancements to existing systems.
A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.
Accountability
E008: Review internal processes
Establish regular internal reviews of key processes and document review records and approvals
6.3: Planning of changes — The organization shall plan and control changes to the AI management system in a planned manner.
7.5.2: Creating and updating documented information — The organization shall ensure documented information is properly created, updated, and controlled for suitability and adequacy.
9.2.1: Internal audit - General — The organization shall conduct internal audits at planned intervals to provide information on the AI management system.
9.2.2: Internal audit programme — The organization shall plan, establish, implement and maintain an audit programme, including the frequency, methods, responsibilities, planning requirements and reporting.
9.3.1: Management review - General — The organization shall review the AI management system at planned intervals to ensure its suitability, adequacy, and effectiveness.
9.3.2: Management review inputs — Management review inputs shall include audits, performance trends, nonconformities, feedback, risks, changes, and resources.
9.3.3: Management review results — Management review results shall include decisions on improvements, policy/objectives, resources, and follow-up actions.
A.2.3: Alignment with other organizational policies — The organization shall determine where other policies can be affected by or apply to the organization's objectives with respect to AI systems.
A.2.4: Review of the AI policy — The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.
A.3.3: Reporting of concerns — The organization shall define and put in place a process to report concerns about the organization's role with respect to an AI system throughout its life cycle.
Accountability
E009: Monitor third-party access
Implement systems to monitor third-party access
—
Accountability
E010: Establish AI acceptable use policy
Establish and implement an AI acceptable use policy
4.1: Understanding the organization and its context — The organization shall determine external and internal issues relevant to the AI management system’s purpose and ability to achieve intended results.
4.3: Determining the scope of the AI management system — The organization shall define and document the scope of the AI management system, including applicability and boundaries.
5.2: AI policy — Top management shall establish an AI policy appropriate to the organization and supportive of AI objectives.
A.2.2: AI policy — The organization shall document a policy for the development or use of AI systems.
A.2.4: Review of the AI policy — The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.
A.9.2: Processes for responsible use of AI systems — The organization shall define and document the processes for the responsible use of AI systems.
A.9.3: Objectives for responsible use of AI system — The organization shall identify and document objectives to guide the responsible use of AI systems.
A.9.4: Intended use of the AI system — The organization shall ensure that the AI system is used according to the intended uses of the AI system and its accompanying documentation.
Accountability
E011: Record processing locations
Document AI data processing locations
A.7.5: Data provenance — The organization shall define and document a process for recording the provenance of data used in its AI systems over the life cycles of the data and the AI system.
Accountability
E012: Document regulatory compliance
Document applicable AI laws and standards, required data protections, and strategies for compliance
10.2: Nonconformity and corrective action — The organization shall address nonconformities by correcting them, dealing with consequences, and preventing recurrence.
A.2.3: Alignment with other organizational policies — The organization shall determine where other policies can be affected by or apply to the organization's objectives with respect to AI systems.
A.8.5: Information for interested parties — The organization shall determine and document their obligations to report information about the AI system to interested parties.
Accountability
E013: Implement quality management system
Establish a quality management system for AI systems proportionate to the size of the organization
10.1: Continual improvement — The organization shall continually improve the AI management system’s suitability, adequacy, and effectiveness.
10.2: Nonconformity and corrective action — The organization shall address nonconformities by correcting them, dealing with consequences, and preventing recurrence.
4.4: AI management system — The organization shall establish, implement, maintain, and continually improve the AI management system.
6.1.4: AI system impact assessment — The organization shall conduct AI system impact assessments covering potential effects on individuals, groups, and society.
7.1: Resources — The organization shall determine and provide necessary resources for the AI management system.
7.5.1: Documented information — General — The organization shall document information required by the AI management system and by ISO42001.
8.1: Operational planning and control — The organization shall plan, implement, and control processes needed for the AI management system, ensuring outputs meet requirements.
8.4: AI system impact assessment — The organization shall perform AI system impact assessments at planned intervals and when significant changes are proposed.
9.1: Monitoring, measurement, analysis and evaluation — The organization shall determine monitoring, measurement, analysis, and evaluation needed to ensure conformity and effectiveness.
A.4.2: Resource documentation — The organization shall identify and document relevant resources required for all activities at given AI system life cycle stages and other AI-related activities relevant for the organization.
A.5.2: AI system impact assessment process — The organization shall establish a process to assess the potential consequences for individuals or groups of individuals, or both, and societies that can result from the AI system throughout its life cycle.
A.5.3: Documentation of AI system impact assessments — The organization shall document the results of AI system impact assessments and retain results for a defined period.
A.5.4: Assessing AI system impact on individuals or groups of individuals — The organization shall assess and document the potential impacts of AI systems to individuals or groups of individuals throughout the system's life cycle.
A.6.2.3: Documentation of AI system design and development — The organization shall document the AI system design and development based on organizational objectives, documented requirements and specification criteria.
A.6.2.7: AI system technical documentation — The organization shall determine what AI system technical documentation is needed for each relevant category of interested parties, such as users, partners, supervisory authorities, and provide the technical documentation to them in the appropriate form.
Accountability
E014: Share transparency reports
Merged with E017 - see changelog (Q1 2026 update)
7.4: Communication — The organization shall determine internal and external communications relevant to the AI management system.
A.6.2.7: AI system technical documentation — The organization shall determine what AI system technical documentation is needed for each relevant category of interested parties, such as users, partners, supervisory authorities, and provide the technical documentation to them in the appropriate form.
A.8.2: System documentation and information for users — The organization shall determine and provide the necessary information to users of the AI system.
A.8.5: Information for interested parties — The organization shall determine and document their obligations to report information about the AI system to interested parties.
Accountability
E015: Log model activity
Maintain logs of AI system processes, actions, and model outputs where permitted to support incident investigation, auditing, and explanation of AI system behavior
A.6.2.8: AI system recording of event logs — The organization shall determine at which phases of the AI system life cycle, record keeping of event logs should be enabled, but at the minimum when the AI system is in use.
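E015's log requirement is commonly met with an append-only JSON-lines event log that captures enough context per model call to support incident investigation and audit. The field names below are illustrative assumptions; a real schema should follow your audit policy and retention rules.

```python
import json
import time

# Hypothetical sketch: append-only JSON-lines event log for model activity.
def log_event(log: list, event_type: str, payload: dict) -> dict:
    """Serialize one event and append it to the log; return the raw entry."""
    entry = {
        "ts": time.time(),         # wall-clock timestamp for auditing
        "type": event_type,        # e.g. "model_output", "tool_call"
        "payload": payload,        # inputs/outputs needed to explain behavior
    }
    log.append(json.dumps(entry))  # serialized so it can be written as JSONL
    return entry
```

Serializing at write time keeps entries immutable once logged; in production the `log` list would be a file or log-shipping sink rather than in-memory state.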
Accountability
E016: Implement AI disclosure mechanisms
Implement clear disclosure mechanisms to inform users when they are interacting with AI systems rather than humans
A.8.2: System documentation and information for users — The organization shall determine and provide the necessary information to users of the AI system.
Accountability
E017: Document system transparency policy
Establish a system transparency policy and maintain a repository of model cards, datasheets, and interpretability reports for major systems
4.3: Determining the scope of the AI management system — The organization shall define and document the scope of the AI management system, including applicability and boundaries.
5.2: AI policy — Top management shall establish an AI policy appropriate to the organization and supportive of AI objectives.
A.2.2: AI policy — The organization shall document a policy for the development or use of AI systems.
A.2.4: Review of the AI policy — The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.
A.4.2: Resource documentation — The organization shall identify and document relevant resources required for all activities at given AI system life cycle stages and other AI-related activities relevant for the organization.
A.4.3: Data resources — As part of resource identification, the organization shall document information about the data resources utilized for the AI system.
A.4.4: Tooling resources — As part of resource identification, the organization shall document information about the tooling resources utilized for the AI system.
A.4.5: System and computing resources — As part of resource identification, the organization shall document information about the system and computing resources utilized for the AI system.
A.6.2.3: Documentation of AI system design and development — The organization shall document the AI system design and development based on organizational objectives, documented requirements and specification criteria.
Society
F001: Prevent AI cyber misuse
Implement or document guardrails to prevent AI-enabled misuse for cyber attacks and exploitation
A.5.5: Assessing societal impacts of AI systems — The organization shall assess and document the potential societal impacts of their AI systems throughout their life cycle.
Society
F002: Prevent catastrophic misuse
Implement or document guardrails to prevent AI-enabled catastrophic system misuse (chemical / bio / radio / nuclear)
A.5.5: Assessing societal impacts of AI systems — The organization shall assess and document the potential societal impacts of their AI systems throughout their life cycle.