AIUC-1 × ISO 42001: reverse mapping

See the mapping of ISO 42001 controls by AIUC-1 requirements here.

Risk Area

Data & Privacy

AIUC-1 Requirement

A001: Establish input data policy

Description

Establish and communicate AI input data policies covering how customer data is used for model training, inference processing, data retention periods, and customer data rights

ISO 42001 controls

A.7.2: Data for development and enhancement of AI system — The organization shall define, document and implement data management processes related to the development of AI systems.

A.7.3: Acquisition of data — The organization shall determine and document details about the acquisition and selection of the data used in AI systems.

Risk Area

Data & Privacy

AIUC-1 Requirement

A002: Establish output data policy

Description

Establish AI output ownership, usage, opt-out and deletion policies to customers and communicate these policies

ISO 42001 controls

Risk Area

Data & Privacy

AIUC-1 Requirement

A003: Limit AI agent data collection

Description

Implement safeguards to limit AI agent data access to task-relevant information based on user roles and context

ISO 42001 controls

Risk Area

Data & Privacy

AIUC-1 Requirement

A004: Protect IP & trade secrets

Description

Implement safeguards or technical controls to prevent AI systems from leaking company intellectual property or confidential information

ISO 42001 controls

Risk Area

Data & Privacy

AIUC-1 Requirement

A005: Prevent cross-customer data exposure

Description

Implement safeguards to prevent cross-customer data exposure when combining customer data from multiple sources

ISO 42001 controls

Risk Area

Data & Privacy

AIUC-1 Requirement

A006: Prevent PII leakage

Description

Establish safeguards to prevent personal data leakage through AI outputs and logs

ISO 42001 controls

Risk Area

Data & Privacy

AIUC-1 Requirement

A007: Prevent IP violations

Description

Implement safeguards and technical controls to prevent AI outputs from violating copyrights, trademarks, or other third-party intellectual property rights

ISO 42001 controls

A.7.5: Data provenance — The organization shall define and document a process for recording the provenance of data used in its AI systems over the life cycles of the data and the AI system.

Risk Area

Security

AIUC-1 Requirement

B001: Third-party testing of adversarial robustness

Description

Implement adversarial testing program to validate system resilience against adversarial inputs and prompt injection attempts in line with adversarial threat taxonomy

ISO 42001 controls

Risk Area

Security

AIUC-1 Requirement

B002: Detect adversarial input

Description

Implement monitoring capabilities to detect and respond to adversarial inputs and prompt injection attempts

ISO 42001 controls

Risk Area

Security

AIUC-1 Requirement

B003: Manage public release of technical details

Description

Implement controls to prevent over-disclosure of technical information about AI systems and organizational details that could enable adversarial targeting

ISO 42001 controls

Risk Area

Security

AIUC-1 Requirement

B004: Prevent AI endpoint scraping

Description

Implement safeguards to prevent probing or scraping of external AI endpoints

ISO 42001 controls

Risk Area

Security

AIUC-1 Requirement

B005: Implement real-time input filtering

Description

Implement real-time input filtering using automated moderation tools

ISO 42001 controls

Risk Area

Security

AIUC-1 Requirement

B006: Prevent unauthorized AI agent actions

Description

Implement safeguards to prevent AI agents from performing actions beyond intended scope and authorized privileges

ISO 42001 controls

Risk Area

Security

AIUC-1 Requirement

B007: Enforce user access privileges to AI systems

Description

Establish and maintain user access controls and admin privileges for AI systems in line with policy

ISO 42001 controls

Risk Area

Security

AIUC-1 Requirement

B008: Protect model deployment environment

Description

Implement security measures for AI model deployment environments including encryption, access controls and authorization

ISO 42001 controls

Risk Area

Security

AIUC-1 Requirement

B009: Limit output over-exposure

Description

Implement output limitations and obfuscation techniques to safeguard against information leakage

ISO 42001 controls

Risk Area

Safety

AIUC-1 Requirement

C001: Define AI risk taxonomy

Description

Establish a risk taxonomy that categorizes risks within harmful, out-of-scope, and hallucinated outputs, tool calls, and other risks based on application-specific usage

ISO 42001 controls

4.1: Understanding the organization and its context — The organization shall determine external and internal issues relevant to the AI management system’s purpose and ability to achieve intended results.

6.1.1: Actions to address risks and opportunities — General — The organization shall plan actions to address risks and opportunities, integrate them into processes, and evaluate their effectiveness.

6.1.2: AI risk assessment — The organization shall establish and maintain a process for AI risk assessment, including identification, analysis, and evaluation of risks.

6.1.3: AI risk treatment — The organization shall establish and maintain a process for AI risk treatment, including selecting and implementing necessary controls.

6.1.4: AI system impact assessment — The organization shall conduct AI system impact assessments covering potential effects on individuals, groups, and society.

8.2: AI risk assessment — The organization shall perform AI risk assessments at planned intervals and when significant changes occur.

8.3: AI risk treatment — The organization shall implement AI risk treatment plans and review them when assessments identify new or ineffective controls.

8.4: AI system impact assessment — The organization shall perform AI system impact assessments at planned intervals and when significant changes are proposed.

A.5.2: AI system impact assessment process — The organization shall establish a process to assess the potential consequences for individuals or groups of individuals, or both, and societies that can result from the AI system throughout its life cycle.

A.5.3: Documentation of AI system impact assessments — The organization shall document the results of AI system impact assessments and retain results for a defined period.

A.5.4: Assessing AI system impact on individuals or groups of individuals — The organization shall assess and document the potential impacts of AI systems to individuals or groups of individuals throughout the system's life cycle.

A.5.5: Assessing societal impacts of AI systems — The organization shall assess and document the potential societal impacts of their AI systems throughout their life cycle.

Risk Area

Safety

AIUC-1 Requirement

C002: Conduct pre-deployment testing

Description

Conduct internal testing of AI systems prior to deployment across risk categories for system changes requiring formal review or approval

ISO 42001 controls

A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.

A.6.2.5: AI system deployment — The organization shall document a deployment plan and ensure that appropriate requirements are met prior to deployment.

Risk Area

Safety

AIUC-1 Requirement

C003: Prevent harmful outputs

Description

Implement safeguards or technical controls to prevent harmful outputs including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception

ISO 42001 controls

Risk Area

Safety

AIUC-1 Requirement

C004: Prevent out-of-scope outputs

Description

Implement safeguards or technical controls to prevent out-of-scope outputs (e.g. political discussion, healthcare advice)

ISO 42001 controls

Risk Area

Safety

AIUC-1 Requirement

C005: Prevent customer-defined high risk outputs

Description

Implement safeguards or technical controls to prevent additional high risk outputs as defined in risk taxonomy

ISO 42001 controls

Risk Area

Safety

AIUC-1 Requirement

C006: Prevent output vulnerabilities

Description

Implement safeguards to prevent security vulnerabilities in outputs from impacting users

ISO 42001 controls

Risk Area

Safety

AIUC-1 Requirement

C007: Flag high risk outputs

Description

Implement an alerting system that flags high-risk outputs for human review

ISO 42001 controls

A.6.1.2: Objectives for responsible development of AI system — The organization shall identify and document objectives to guide the responsible development AI systems, and take those objectives into account and integrate measures to achieve them in the development life cycle.

A.9.2: Processes for responsible use of AI systems — The organization shall define and document the processes for the responsible use of AI systems.

A.9.3: Objectives for responsible use of AI system — The organization shall identify and document objectives to guide the responsible use of AI systems.

Risk Area

Safety

AIUC-1 Requirement

C008: Monitor AI risk categories

Description

Implement monitoring of AI systems across risk categories

ISO 42001 controls

6.1.1: Actions to address risks and opportunities — General — The organization shall plan actions to address risks and opportunities, integrate them into processes, and evaluate their effectiveness.

6.1.2: AI risk assessment — The organization shall establish and maintain a process for AI risk assessment, including identification, analysis, and evaluation of risks.

6.1.3: AI risk treatment — The organization shall establish and maintain a process for AI risk treatment, including selecting and implementing necessary controls.

8.2: AI risk assessment — The organization shall perform AI risk assessments at planned intervals and when significant changes occur.

8.3: AI risk treatment — The organization shall implement AI risk treatment plans and review them when assessments identify new or ineffective controls.

9.1: Monitoring, measurement, analysis and evaluation — The organization shall determine monitoring, measurement, analysis, and evaluation needed to ensure conformity and effectiveness.

A.5.4: Assessing AI system impact on individuals or groups of individuals — The organization shall assess and document the potential impacts of AI systems to individuals or groups of individuals throughout the system's life cycle.

A.6.2.6: AI system operation and monitoring — The organization shall define and document the necessary elements for the ongoing operation of the AI system. At the minimum, this should include system and performance monitoring, repairs, updates and support.

A.9.2: Processes for responsible use of AI systems — The organization shall define and document the processes for the responsible use of AI systems.

A.9.4: Intended use of the AI system — The organization shall ensure that the AI system is used according to the intended uses of the AI system and its accompanying documentation.

Risk Area

Safety

AIUC-1 Requirement

C009: Enable real-time feedback and intervention

Description

Implement mechanisms to enable real-time user feedback collection and intervention mechanisms

ISO 42001 controls

A.8.3: External reporting — The organization shall provide capabilities for interested parties to report adverse impacts of the AI system.

Risk Area

Safety

AIUC-1 Requirement

C010: Third-party testing for harmful outputs

Description

Appoint expert third parties to evaluate system robustness to harmful outputs including distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception at least every 3 months

ISO 42001 controls

A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.

Risk Area

Safety

AIUC-1 Requirement

C011: Third-party testing for out-of-scope outputs

Description

Appoint expert third parties to evaluate system robustness to out-of-scope outputs at least every 3 months (e.g. political discussion, healthcare advice)

ISO 42001 controls

A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.

Risk Area

Safety

AIUC-1 Requirement

C012: Third-party testing for customer-defined risk

Description

Appoint expert third-parties to evaluate system robustness to additional high-risk outputs as defined in risk taxonomy at least every 3 months

ISO 42001 controls

A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.

Risk Area

Reliability

AIUC-1 Requirement

D001: Prevent hallucinated outputs

Description

Implement safeguards or technical controls to prevent hallucinated outputs

ISO 42001 controls

Risk Area

Reliability

AIUC-1 Requirement

D002: Third-party testing for hallucinations

Description

Appoint expert third-parties to evaluate hallucinated outputs at least every 3 months

ISO 42001 controls

A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.

Risk Area

Reliability

AIUC-1 Requirement

D003: Restrict unsafe tool calls

Description

Implement safeguards or technical controls to prevent tool calls in AI systems from executing unauthorized actions, accessing restricted information, or making decisions beyond their intended scope

ISO 42001 controls

Risk Area

Reliability

AIUC-1 Requirement

D004: Third-party testing of tool calls

Description

Appoint expert third-parties to evaluate tool calls in AI systems, including executing unauthorized actions, accessing restricted information, or making decisions beyond their intended scope at least every 3 months

ISO 42001 controls

A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.

Risk Area

Accountability

AIUC-1 Requirement

E001: AI failure plan for security breaches

Description

Document AI failure plan for AI privacy and security breaches assigning accountable owners and establishing notification and remediation with third-party support as needed (e.g. legal, PR, insurers)

ISO 42001 controls

A.8.4: Communication of incidents — The organization shall determine and document a plan for communicating incidents to users of the AI system.

A.8.5: Information for interested parties — The organization shall determine and document their obligations to reporting information about the AI system to interested parties.

Risk Area

Accountability

AIUC-1 Requirement

E002: AI failure plan for harmful outputs

Description

Document AI failure plan for harmful AI outputs that cause significant customer harm assigning accountable owners and establishing remediation with third-party support as needed (e.g. legal, PR, insurers)

ISO 42001 controls

A.8.4: Communication of incidents — The organization shall determine and document a plan for communicating incidents to users of the AI system.

Risk Area

Accountability

AIUC-1 Requirement

E003: AI failure plan for hallucinations

Description

Document AI failure plan for hallucinated AI outputs that cause substantial customer financial loss assigning accountable owners and establishing remediation with third-party support as needed (e.g. legal, PR, insurers)

ISO 42001 controls

A.8.4: Communication of incidents — The organization shall determine and document a plan for communicating incidents to users of the AI system.

Risk Area

Accountability

AIUC-1 Requirement

E004: Assign accountability

Description

Document which AI system changes across the development & deployment lifecycle require formal review or approval, assign a lead accountable for each, and document their approval with supporting evidence

ISO 42001 controls

5.1: Leadership and commitment — Top management shall demonstrate leadership and commitment to the AI management system and its effectiveness.

5.3: Roles, responsibilities and authorities — Top management shall assign roles, responsibilities, and authorities for the AI management system.

7.2: Competence — The organization shall ensure competence of persons working under its control based on education, training, or experience.

A.10.2: Allocating responsibilities — The organization shall ensure that responsibilities within their AI system life cycle are allocated between the organization, its partners, suppliers, customers and third parties.

A.3.2: AI roles and responsibilities — Roles and responsibilities for AI shall be defined and allocated according to the needs of the organization.

A.4.6: Human resources — As part of resource identification, the organization shall document information about the human resources and their competences utilized for the development, deployment, operation, change management, maintenance, transfer and decommissioning, as well as verification and integration of the AI system.

A.6.2.2: AI system requirements and specification — The organization shall specify and document requirements for new AI systems or material enhancements to existing systems.

Risk Area

Accountability

AIUC-1 Requirement

E005: Assess cloud vs on-prem processing

Description

Establish criteria for selecting cloud provider, and circumstances for on-premises processing considering data sensitivity, regulatory requirements, security controls, and operational needs

ISO 42001 controls

Risk Area

Accountability

AIUC-1 Requirement

E006: Conduct vendor due diligence

Description

Establish AI vendor due diligence processes for foundation and upstream model providers covering data handling, PII controls, security and compliance

ISO 42001 controls

A.10.3: Suppliers — The organization shall establish a process to ensure that its usage of services, products or materials provided by suppliers aligns with the organization's approach to the responsible development and use of AI systems.

Risk Area

Accountability

AIUC-1 Requirement

E007: Document system change approvals

Description

Merged with E004 - see changelog (Q1 2026 update)

ISO 42001 controls

6.3: Planning of changes — The organization shall plan and control changes to the AI management system in a planned manner.

A.6.2.2: AI system requirements and specification — The organization shall specify and document requirements for new AI systems or material enhancements to existing systems.

A.6.2.4: AI system verification and validation — The organization shall define and document verification and validation measures for the AI system and specify criteria for their use.

Risk Area

Accountability

AIUC-1 Requirement

E008: Review internal processes

Description

Establish regular internal reviews of key processes and document review records and approvals

ISO 42001 controls

6.3: Planning of changes — The organization shall plan and control changes to the AI management system in a planned manner.

7.5.2: Creating and updating documented information — The organization shall ensure documented information is properly created, updated, and controlled for suitability and adequacy.

9.2.1: Internal audit - General — The organization shall conduct internal audits at planned intervals to provide information on the AI management system.

9.2.2: Internal audit programme — Top management shall review the AI management system at planned intervals for continuing suitability, adequacy, and effectiveness.

9.3.1: Management review - General — The organization shall review the AI management system at planned intervals to ensure its suitability, adequacy, and effectiveness.

9.3.2: Management review inputs — Management review inputs shall include audits, performance trends, nonconformities, feedback, risks, changes, and resources.

9.3.3: Management review results — Management review results shall include decisions on improvements, policy/objectives, resources, and follow-up actions.

A.2.3: Alignment with other organizational policies — The organization shall determine where other policies can be affected by or apply to the organization's objectives with respect to AI systems.

A.2.4: Review of the AI policy — The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.

A.3.3: Reporting of concerns — The organization shall define and put in place a process to report concerns about the organization's role with respect to an AI system throughout its life cycle.

Risk Area

Accountability

AIUC-1 Requirement

E009: Monitor third-party access

Description

Implement systems to monitor third party access

ISO 42001 controls

Risk Area

Accountability

AIUC-1 Requirement

E010: Establish AI acceptable use policy

Description

Establish and implement an AI acceptable use policy

ISO 42001 controls

4.1: Understanding the organization and its context — The organization shall determine external and internal issues relevant to the AI management system’s purpose and ability to achieve intended results.

4.3: Determining the scope of the AI management system — The organization shall define and document the scope of the AI management system, including applicability and boundaries.

5.2: AI policy — Top management shall establish an AI policy appropriate to the organization and supportive of AI objectives.

A.2.2: AI policy — The organization shall document a policy for the development or use of AI systems.

A.2.4: Review of the AI policy — The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.

A.9.2: Processes for responsible use of AI systems — The organization shall define and document the processes for the responsible use of AI systems.

A.9.3: Objectives for responsible use of AI system — The organization shall identify and document objectives to guide the responsible use of AI systems.

A.9.4: Intended use of the AI system — The organization shall ensure that the AI system is used according to the intended uses of the AI system and its accompanying documentation.

Risk Area

Accountability

AIUC-1 Requirement

E011: Record processing locations

Description

Document AI data processing locations

ISO 42001 controls

A.7.5: Data provenance — The organization shall define and document a process for recording the provenance of data used in its AI systems over the life cycles of the data and the AI system.

Risk Area

Accountability

AIUC-1 Requirement

E012: Document regulatory compliance

Description

Document applicable AI laws and standards, required data protections, and strategies for compliance

ISO 42001 controls

10.2: Nonconformity and corrective action — The organization shall address nonconformities by correcting them, dealing with consequences, and preventing recurrence.

A.2.3: Alignment with other organizational policies — The organization shall determine where other policies can be affected by or apply to the organization's objectives with respect to AI systems.

A.8.5: Information for interested parties — The organization shall determine and document their obligations to reporting information about the AI system to interested parties.

Risk Area

Accountability

AIUC-1 Requirement

E013: Implement quality management system

Description

Establish a quality management system for AI systems proportionate to the size of the organization

ISO 42001 controls

10.1: Continual improvement — The organization shall continually improve the AI management system’s suitability, adequacy, and effectiveness.

10.2: Nonconformity and corrective action — The organization shall address nonconformities by correcting them, dealing with consequences, and preventing recurrence.

4.4: AI management system — The organization shall establish, implement, maintain, and continually improve the AI management system.

6.1.4: AI system impact assessment — The organization shall conduct AI system impact assessments covering potential effects on individuals, groups, and society.

7.1: Resources — The organization shall determine and provide necessary resources for the AI management system.

7.5.1: Documented information — General — The organization shall document information required by the AI management system and by ISO42001.

8.1: Operational planning and control — The organization shall plan, implement, and control processes needed for the AI management system, ensuring outputs meet requirements.

8.4: AI system impact assessment — The organization shall perform AI system impact assessments at planned intervals and when significant changes are proposed.

9.1: Monitoring, measurement, analysis and evaluation — The organization shall determine monitoring, measurement, analysis, and evaluation needed to ensure conformity and effectiveness.

A.4.2: Resource documentation — The organization shall identify and document relevant resources required for all activities at given AI system life cycle stages and other AI-related activities relevant for the organization.

A.5.2: AI system impact assessment process — The organization shall establish a process to assess the potential consequences for individuals or groups of individuals, or both, and societies that can result from the AI system throughout its life cycle.

A.5.3: Documentation of AI system impact assessments — The organization shall document the results of AI system impact assessments and retain results for a defined period.

A.5.4: Assessing AI system impact on individuals or groups of individuals — The organization shall assess and document the potential impacts of AI systems to individuals or groups of individuals throughout the system's life cycle.

A.6.2.3: Documentation of AI system design and development — The organization shall document the AI system design and development based on organizational objectives, documented requirements and specification criteria.

A.6.2.7: AI system technical documentation — The organization shall determine what AI system technical documentation is needed for each relevant category of interested parties, such as users, partners, supervisory authorities, and provide the technical documentation to them in the appropriate form.

Risk Area

Accountability

AIUC-1 Requirement

E014: Share transparency reports

Description

Merged with E017 - see changelog (Q1 2026 update)

ISO 42001 controls

7.4: Communication — The organization shall determine internal and external communications relevant to the AI management system.

A.6.2.7: AI system technical documentation — The organization shall determine what AI system technical documentation is needed for each relevant category of interested parties, such as users, partners, supervisory authorities, and provide the technical documentation to them in the appropriate form.

A.8.2: System documentation and information for users — The organization shall determine and provide the necessary information to users of the AI system.

A.8.5: Information for interested parties — The organization shall determine and document their obligations to reporting information about the AI system to interested parties.

Risk Area

Accountability

AIUC-1 Requirement

E015: Log model activity

Description

Maintain logs of AI system processes, actions, and model outputs where permitted to support incident investigation, auditing, and explanation of AI system behavior

ISO 42001 controls

A.6.2.8: AI system recording of event logs — The organization shall determine at which phases of the AI system life cycle, record keeping of event logs should be enabled, but at the minimum when the AI system is in use.

Risk Area

Accountability

AIUC-1 Requirement

E016: Implement AI disclosure mechanisms

Description

Implement clear disclosure mechanisms to inform users when they are interacting with AI systems rather than humans

ISO 42001 controls

A.8.2: System documentation and information for users — The organization shall determine and provide the necessary information to users of the AI system.

Risk Area

Accountability

AIUC-1 Requirement

E017: Document system transparency policy

Description

Establish a system transparency policy and maintain a repository of model cards, datasheets, and interpretability reports for major systems

ISO 42001 controls

4.3: Determining the scope of the AI management system — The organization shall define and document the scope of the AI management system, including applicability and boundaries.

5.2: AI policy — Top management shall establish an AI policy appropriate to the organization and supportive of AI objectives.

A.2.2: AI policy — The organization shall document a policy for the development or use of AI systems.

A.2.4: Review of the AI policy — The AI policy shall be reviewed at planned intervals or additionally as needed to ensure its continuing suitability, adequacy and effectiveness.

A.4.2: Resource documentation — The organization shall identify and document relevant resources required for all activities at given AI system life cycle stages and other AI-related activities relevant for the organization.

A.4.3: Data resources — As part of resource identification, the organization shall document information about the data resources utilized for the AI system.

A.4.4: Tooling resources — As part of resource identification, the organization shall document information about the tooling resources utilized for the AI system.

A.4.5: System and computing resources — As part of resource identification, the organization shall document information about the system and computing resources utilized for the AI system.

A.6.2.3: Documentation of AI system design and development — The organization shall document the AI system design and development based on organizational objectives, documented requirements and specification criteria.

Risk Area

Society

AIUC-1 Requirement

F001: Prevent AI cyber misuse

Description

Implement or document guardrails to prevent AI-enabled misuse for cyber attacks and exploitation

ISO 42001 controls

A.5.5: Assessing societal impacts of AI systems — The organization shall assess and document the potential societal impacts of their AI systems throughout their life cycle.

Risk Area

Society

AIUC-1 Requirement

F002: Prevent catastrophic misuse

Description

Implement or document guardrails to prevent AI-enabled catastrophic system misuse (chemical / bio / radio / nuclear)

ISO 42001 controls

A.5.5: Assessing societal impacts of AI systems — The organization shall assess and document the potential societal impacts of their AI systems throughout their life cycle.

Last updated March 20, 2026.