AIUC-1
Operational Evidence
Requirement
Control activity
Aligning adversarial testing with broader security testing programs. For example, integrating AI-specific test cases into broader penetration testing, sharing threat models across red/blue teams, aligning test cycles with security audit and compliance calendars.
Evidence
B001.2 Documentation: Security program integration
Penetration test reports with AI-specific test cases, shared threat models, and testing calendars, or documentation of broader security program incorporating AI adversarial testing requirements.
Tags
Supplemental Control
Operational Practices
Engineering Practice
Internal processes
Requirement
Control activity
Documenting limitations on technical information release. For example, limiting public disclosure of model architectures, algorithms, training data details, system configurations, and performance metrics, requiring approval before sharing technical specifications or implementation details.
Controlling organizational information to balance transparency with security. For example, limiting disclosure of AI team details, development timelines, and other information that could reveal technical capabilities, reviewing public communications for sensitive information.
Evidence
B003.1 Documentation: Technical information disclosure guidelines
Policy document, SOP, or handbook section defining limitations and approval requirements for publicly sharing AI system technical details - may include communication policy limiting disclosure of model architectures or configurations, engineering handbook with approval workflows for technical specifications, or internal procedures controlling release of organizational AI information.
Tags
Mandatory Control
Operational Practices
Internal policies
Requirement
Control activity
Establishing approval processes. For example, requiring designated review of public content referencing AI capabilities (e.g. publications, presentations, and marketing materials) and documenting approved disclosures with business justification.
Evidence
B003.2 Documentation: Public disclosure approval records
Approval email, ticket, or review documentation for public AI communications - may include approval requests in email or Jira/Slack for blog posts or press releases, marketing review records for AI capability disclosures, or periodic security review logs for public-facing AI content.
Tags
Supplemental Control
Operational Practices
Internal processes
Requirement
Control activity
Conducting access reviews and updates at least quarterly. For example, validating access assignments, updating based on policy or role changes, documenting access changes with AI-specific context (e.g. model access justification, changes to agent capability boundaries, or access to sensitive prompt/response history).
Evidence
B007.2 Documentation: Access reviews
Quarterly access review documentation - may include access review meeting notes, tracking records of access changes with justifications, or reports documenting role changes and access modifications based on policy updates.
Tags
Mandatory Control
Operational Practices
Internal processes
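As a minimal illustration of the quarterly access review described above, the sketch below flags AI-related access grants that lack a documented justification. The resource prefixes, CSV column names, and file paths are assumptions for the example, not part of AIUC-1; adapt them to your own IAM export format.

```python
# Illustrative sketch only: flag AI-related access grants missing a documented
# justification, as input to a quarterly access review.
import csv
from datetime import date

AI_RESOURCE_PREFIXES = ("model:", "agent:", "prompt-history:")  # assumed naming scheme

def review_access(export_path: str, report_path: str) -> None:
    flagged = []
    with open(export_path, newline="") as f:
        # Assumed export columns: user, resource, role, justification
        for row in csv.DictReader(f):
            is_ai_resource = row["resource"].startswith(AI_RESOURCE_PREFIXES)
            if is_ai_resource and not row.get("justification", "").strip():
                flagged.append(row)
    with open(report_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["user", "resource", "role", "justification"], extrasaction="ignore"
        )
        writer.writeheader()
        writer.writerows(flagged)
    print(f"{date.today()}: {len(flagged)} AI access grants need review -> {report_path}")

if __name__ == "__main__":
    review_access("access_export.csv", "q_access_review.csv")
```

The output report would then be attached to the quarterly review record alongside meeting notes and change justifications.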
Requirement · Mandatory Requirement
Control activity
Providing user-facing notices or documentation about output limitations.
Evidence
B009.2 Demonstration: User output notices
Screenshot of product interface showing user notices about output limitations - may include messages indicating truncated or suppressed outputs for security or privacy reasons, user documentation explaining limitation policies, or help articles describing output restrictions.
Tags
Supplemental Control
Operational Practices
Product
Requirement · Mandatory Requirement
Control activity
Defining risk categories with severity levels and examples based on industry and deployment context. For example, classifying harmful outputs such as distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception, and identifying other high-risk use cases such as safety-critical instructions, legal recommendations, and financial advice.
Aligning risk taxonomy with external frameworks and standards.
Establishing severity grading appropriate to organizational context and risk tolerance. For example, implementing consistent scoring methodology across risk categories, defining thresholds for flagging and human review.
Evidence
C001.1 Documentation: AI risk taxonomy
Internal policy document, risk framework, or taxonomy defining AI risk categories with severity levels and examples specific to deployment context. Example taxonomies to draw upon include the NIST AI RMF functions, EU AI Act Article 9, and ISO/IEC 42001 controls.
Tags
Mandatory Control
Operational Practices
Internal policies
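One way to make the severity grading and flagging thresholds described above consistent across categories is to keep the taxonomy machine-readable. The sketch below is illustrative only; the category names, severity scores, and review threshold are placeholders rather than AIUC-1 requirements.

```python
# Illustrative sketch only: a machine-readable risk taxonomy with severity
# levels and a threshold that routes outputs to human review.
RISK_TAXONOMY = {
    "high_risk_advice": {"severity": 4, "example": "unlicensed medical or legal guidance"},
    "offensive_content": {"severity": 3, "example": "slurs or harassment"},
    "deception": {"severity": 3, "example": "fabricated citations presented as fact"},
    "distressed_output": {"severity": 2, "example": "model expresses distress or anger"},
}
HUMAN_REVIEW_THRESHOLD = 3  # severities at or above this value are flagged

def needs_human_review(category: str) -> bool:
    """Return True when an output tagged with `category` should be routed to review."""
    return RISK_TAXONOMY.get(category, {"severity": 0})["severity"] >= HUMAN_REVIEW_THRESHOLD

assert needs_human_review("high_risk_advice")
assert not needs_human_review("distressed_output")
```

Keeping the taxonomy in version control also produces the change history that the review evidence below calls for.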
Requirement · Mandatory Requirement
Control activity
Maintaining taxonomy currency with documented change management. For example, updating based on emerging threats or incidents.
Evidence
C001.2 Documentation: Risk taxonomy reviews
Meeting notes, change log, or review documentation showing quarterly reviews of the risk taxonomy. Could include review dates, participants, decisions made (categories added/removed/modified, threshold adjustments), rationale for changes, approval records, and version history showing taxonomy updates over time with timestamps. Can be standalone or part of broader internal audit/review or change management procedures.
Tags
Mandatory Control
Operational Practices
Internal processes
Requirement · Mandatory Requirement
Control activity
Evaluating harm mitigation controls using performance metrics.
Evidence
C003.4 Documentation: Filtering performance benchmarks
Test results, metrics dashboard, or evaluation report showing performance of harm controls - may include false positive/negative rates, coverage analysis of test scenarios, benchmark results against harm datasets (e.g., ToxiGen, RealToxicityPrompts), or confusion matrices showing filtering accuracy across harm categories.
Tags
Supplemental Control
Operational Practices
Internal processes
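The false positive/negative rates and confusion matrices mentioned above can be produced with a small evaluation harness. The sketch below is a minimal example, assuming `harm_filter` stands in for whatever classifier or moderation endpoint is actually deployed and that labeled examples come from a benchmark such as ToxiGen or an internal harm dataset.

```python
# Illustrative sketch only: compute confusion counts and false positive/negative
# rates for a harm filter against a labeled evaluation set.
from collections import Counter

def evaluate_filter(harm_filter, labeled_examples):
    """labeled_examples: iterable of (text, is_harmful) pairs."""
    counts = Counter()
    for text, is_harmful in labeled_examples:
        predicted = harm_filter(text)
        if predicted and is_harmful:
            counts["tp"] += 1
        elif predicted and not is_harmful:
            counts["fp"] += 1
        elif not predicted and is_harmful:
            counts["fn"] += 1
        else:
            counts["tn"] += 1
    fpr = counts["fp"] / max(counts["fp"] + counts["tn"], 1)
    fnr = counts["fn"] / max(counts["fn"] + counts["tp"], 1)
    return {"confusion": dict(counts), "false_positive_rate": fpr, "false_negative_rate": fnr}

# Example with a trivial keyword filter as a placeholder:
metrics = evaluate_filter(
    lambda text: "attack" in text.lower(),
    [("how do I attack a server", True), ("plan my trip", False)],
)
print(metrics)
```

Running this per harm category yields the per-category accuracy breakdown the evidence description refers to.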
Requirement · Optional Requirement
Control activity
Defining high-risk output criteria drawing on risk taxonomy.
Evidence
C007.1 Documentation: Definition of high-risk recommendations criteria
Document or policy defining high-risk outputs requiring human review - should specify criteria for flagging (e.g. financial advice thresholds, medical/legal/safety domains, reputational harm triggers). Can be standalone or included in existing AI risk taxonomy/AI risk policy.
Tags
Mandatory Control
Operational Practices
Internal policies
Requirement · Optional Requirement
Control activity
Establishing human review workflows for flagged high-risk outputs. For example, assigning reviewers, defining escalation procedures for complex cases, managing review queues with response time tracking, and documenting review decisions.
Evidence
C007.3 Documentation: Human review workflows
Workflow documentation or ticketing system configuration showing human review process for flagged outputs - may include runbook with reviewer assignments and escalation paths, queue management in Jira/Linear/support ticketing with pending review tracking, SLA targets for review response times, or procedure document defining review decision documentation requirements.
Tags
Supplemental Control
Operational Practices
Internal processes
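As a sketch of the review workflow above, the snippet below models a queue of flagged outputs with reviewer assignment, decision recording, and response-time tracking. In practice this would live in Jira, Linear, or a support ticketing system, as the evidence description notes; the SLA value, field names, and decision labels are assumptions.

```python
# Illustrative sketch only: an in-memory review queue for flagged high-risk
# outputs with SLA tracking and documented decisions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

SLA = timedelta(hours=4)  # assumed response-time target

@dataclass
class ReviewItem:
    output_id: str
    risk_category: str
    flagged_at: datetime = field(default_factory=datetime.utcnow)
    reviewer: Optional[str] = None
    decision: Optional[str] = None  # e.g. "approve", "suppress", "escalate"

    def overdue(self, now: Optional[datetime] = None) -> bool:
        return self.decision is None and (now or datetime.utcnow()) - self.flagged_at > SLA

queue: list[ReviewItem] = []

def flag_output(output_id: str, risk_category: str) -> ReviewItem:
    item = ReviewItem(output_id, risk_category)
    queue.append(item)
    return item

def record_decision(item: ReviewItem, reviewer: str, decision: str) -> None:
    item.reviewer, item.decision = reviewer, decision  # retained as review documentation

overdue = [i for i in queue if i.overdue()]  # feeds SLA reporting and escalation
```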
Requirement
Control activity
Reviewing user feedback and intervention logs regularly. For example, evaluating patterns in interventions, adapting communication methods based on user needs and emerging risk considerations.
Analyzing collected feedback using structured methodologies. For example, categorizing by risk domain, prioritizing based on frequency and severity, routing high-impact or repeat issues into product backlog or compliance workflows.
Evidence
C009.2 Documentation: User feedback & intervention reviews
Logs, reports, or dashboard showing review and analysis of user feedback and intervention patterns - may include feedback summary reports, intervention frequency analysis, categorization by risk domain, documentation of system changes made in response to patterns, or integration with product backlog/compliance workflows.
Tags
Supplemental Control
Operational Practices
Internal processes
Requirement · Mandatory Requirement
Control activity
Requiring human approval for sensitive tool operations. For example, requiring human confirmation before executing high-risk actions, implementing approval workflows for operations beyond autonomous boundaries.
Evidence
D003.4 Config: Human-approval workflows
Screenshot of approval workflow, code requiring human confirmation, or ticketing system for sensitive tool operations
Tags
Supplemental Control
Operational Practices
Internal processes
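A minimal sketch of the human-approval gate described above is shown below. The set of sensitive tool names and the console prompt are placeholders; a production workflow would typically route the approval through a ticketing or chat integration rather than standard input.

```python
# Illustrative sketch only: block sensitive tool operations until a human
# approves them; rejections are returned so they can be logged for audit.
SENSITIVE_TOOLS = {"issue_refund", "delete_record", "send_external_email"}  # assumed names

def execute_tool(tool_name: str, args: dict, tool_registry: dict, approver=input):
    if tool_name in SENSITIVE_TOOLS:
        answer = approver(f"Approve {tool_name} with {args}? [y/N] ")
        if answer.strip().lower() != "y":
            return {"status": "rejected", "tool": tool_name}
    return {"status": "executed", "result": tool_registry[tool_name](**args)}

# Example usage with a stub registry and an auto-approving reviewer (demo only):
registry = {"issue_refund": lambda amount, order_id: f"refunded {amount} for {order_id}"}
print(execute_tool("issue_refund", {"amount": 40, "order_id": "A-17"}, registry,
                   approver=lambda prompt: "y"))
```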
Requirement · Mandatory Requirement
Control activity
Reviewing patterns of AI tool usage. For example, identifying anomalies, updating tool permissions, and retiring unused or high-risk functions during scheduled evaluations.
Evidence
D003.5 Documentation: Tool call log reviews
Reports or documentation showing periodic review of tool usage patterns, permission updates, and function retirement decisions - may include usage analytics identifying anomalies, change logs showing permission adjustments, or records of deprecated/retired tools with rationale.
Tags
Supplemental Control
Operational Practices
Internal processes
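The periodic usage review above can be seeded with a simple log summary, sketched below, that surfaces retirement candidates (registered but unused tools) and usage spikes relative to the prior period. The log structure and spike factor are assumptions for illustration.

```python
# Illustrative sketch only: summarize tool-call logs to surface anomalies and
# retirement candidates for a scheduled review.
from collections import Counter

def review_tool_usage(current_calls, previous_calls, registered_tools, spike_factor=3):
    """current_calls / previous_calls: iterables of tool names from each log period."""
    current = Counter(current_calls)
    previous = Counter(previous_calls)
    unused = sorted(set(registered_tools) - set(current))
    spikes = sorted(
        tool for tool, n in current.items()
        if n > spike_factor * max(previous.get(tool, 0), 1)
    )
    return {"retirement_candidates": unused, "usage_spikes": spikes}

print(review_tool_usage(
    current_calls=["search"] * 50 + ["issue_refund"] * 12,
    previous_calls=["search"] * 45 + ["issue_refund"] * 2,
    registered_tools=["search", "issue_refund", "legacy_export"],
))
```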
Requirement · Mandatory Requirement
Control activity
Assigning a breach response lead from existing staff. For example, IT manager, security officer, or designated executive with authority to engage external counsel and specialists as needed.
Defining breach notification procedures. For example, customer communications, regulatory reporting requirements, and vendor notifications based on applicable privacy laws.
Implementing security remediation measures. For example, system freeze capabilities, vulnerability fixes, access control updates, and coordination with external security consultants when internal expertise is insufficient.
Establishing evidence collection requirements with guidance on preserving evidence for potential legal review. For example, system logs, user activity records, and basic documentation.
Evidence
E001.1 Documentation: AI failure plan for security breaches
Can be standalone document or integrated in existing incident response procedures/policies
Tags
Mandatory Control
Operational Practices
AI failure plan
Requirement · Mandatory Requirement
Control activity
Implementing customer communication protocols. For example, disclosure procedures, explanation of corrective actions, and follow-up commitments with executive approval for significant incidents.
Establishing immediate mitigation steps with designated staff responsibilities. For example, system freeze capabilities, output suppression, customer notification, and system adjustments.
Evidence
E002.1 Documentation: AI failure plan for harmful outputs
Can be standalone document or integrated in existing incident response procedures/policies
Tags
Mandatory Control
Operational Practices
AI failure plan
Requirement · Mandatory Requirement
Control activity
Defining harmful output categories with reference to risk taxonomy. For example, discriminatory content, offensive material, inappropriate recommendations, ideally with concrete examples.
Coordinating external support engagement. For example, legal counsel consultation, PR support, and insurance claim procedures.
Evidence
E002.2 Documentation: Additional harmful output failure procedures
May include harmful output category definitions referenced to risk taxonomy, external support contact list (legal counsel, PR firms, insurance providers), support engagement procedures or runbooks, or escalation criteria for involving external parties.
Tags
Supplemental Control
Operational Practices
AI failure plan
Requirement · Mandatory Requirement
Control activity
Establishing compensation assessment procedures. For example, loss evaluation methods, settlement approaches, and payment authorization levels with appropriate approval requirements.
Implementing remediation measures. For example, system freeze capabilities, model adjustments, output validation improvements, customer notification, and enhanced monitoring.
Evidence
E003.1 Documentation: AI failure plan for hallucinations
Can be standalone document or integrated in existing incident response procedures/policies
Tags
Mandatory Control
Operational Practices
AI failure plan
Requirement · Mandatory Requirement
Control activity
Defining hallucination incident types.
Coordinating potential external support. For example, legal consultation for significant claims, financial review when needed, and insurance coverage activation.
Evidence
E003.2 Documentation: Additional hallucination failure procedures
May include hallucination incident categories (e.g. factual errors, incorrect recommendations), external support contact list (legal counsel, financial reviewers, insurance providers), support engagement procedures, or escalation criteria for involving external parties.
Tags
Supplemental Control
Operational Practices
AI failure plan
Requirement · Mandatory Requirement
Control activity
Defining AI system changes requiring approval, including model selection, material changes to the meta prompt, adding/removing guardrails, changes to end-user workflows, and other changes with material impact. For example, +/-10% performance on evals.
Assigning an accountable lead as approver for each of these changes. Can follow a RACI structure to formalize roles of those consulted and informed.
Evidence
E004.1 Documentation: Change approval policy and records
Documentation or policy defining which AI system changes require approval with assigned accountable leads, and approval records showing sign-offs with supporting evidence. Can be a change management policy, overview table in e.g. Notion, approval logs from Jira/Linear/GitHub, or deployment gate documentation.
Tags
Mandatory Control
Operational Practices
Internal policies
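One lightweight way to record the mapping above is a table of change types to accountable leads, with a deployment gate that checks for the required sign-off. The sketch below is illustrative only; the change-type names and approver roles are placeholders, and real approvals would usually be pulled from Jira or GitHub rather than passed in directly.

```python
# Illustrative sketch only: map change types to accountable approvers and
# verify a sign-off exists before deployment proceeds.
CHANGE_APPROVERS = {
    "model_selection": "head_of_ai",
    "meta_prompt_change": "ai_product_lead",
    "guardrail_change": "security_lead",
    "end_user_workflow_change": "product_lead",
}

def deployment_allowed(change_type: str, approvals: dict[str, str]) -> bool:
    """approvals maps change_type -> approver who signed off."""
    required = CHANGE_APPROVERS.get(change_type)
    if required is None:
        raise ValueError(f"Unknown change type: {change_type}")
    return approvals.get(change_type) == required

assert deployment_allowed("guardrail_change", {"guardrail_change": "security_lead"})
assert not deployment_allowed("model_selection", {})
```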
Requirement · Mandatory Requirement
Control activity
Conducting deployment risk assessments. For example, evaluating data sensitivity, regulatory compliance requirements, IP protection needs, and security controls for cloud vs. on-premises AI processing.
Documenting decision criteria and rationale. For example, establishing clear selection factors, maintaining records of deployment choices with business justification.
Reviewing deployment decisions when requirements change. For example, reassessing choices when data sensitivity, regulations, or threat landscape evolves.
Evidence
E005.1 Documentation: Deployment decisions
Risk assessment and decision record evaluating cloud vs. on-premises factors (e.g. data sensitivity, regulatory requirements, security controls) with documented criteria and rationale - may include deployment decision memos, risk assessment reports, and records of periodic reviews when requirements changed.
Tags
Mandatory Control
Operational Practices
Internal processes
Requirement · Mandatory Requirement
Control activity
Defining assessment criteria for foundational or upstream AI models. For example, data handling and ownership practices, PII controls, security measures, compliance status, and whether the model is open-source.
Conducting documented assessments. For example, scoring results, verification activities such as certifications reviewed and references contacted, and approval decisions.
Maintaining assessment records with sufficient detail for audit purposes and retaining due diligence evidence before vendor approval.
Evidence
E006.1 Documentation: Vendor due diligence
Vendor assessment records showing evaluation criteria, scoring results, verification activities, approval decisions with accountable leads, and retained evidence supporting the assessment. May include vendor questionnaires, security reviews, compliance documentation, or due diligence reports.
Tags
Mandatory Control
Operational Practices
Vendor Contracts
Internal processes
Requirement · Mandatory Requirement
Control activity
Reviewing decision processes every quarter, including AI system changes, foundational model selection, and security assessments.
Maintaining a centralized repository of decision records and conducting internal reviews of these records. For example, supporting evidence reviewed and remediation plans.
Documenting and tracking remediation of any risks identified.
Evidence
E008.1 Documentation: Internal review
Centralized repository, policy, or tickets showing quarterly internal reviews - e.g. review meeting notes or calendars, decision logs in Jira/Notion/Confluence, risk registers with remediation status, threat modelling outcomes, or audit trails of review activities.
Tags
Mandatory Control
Operational Practices
Internal processes
Requirement · Mandatory Requirement
Control activity
Collecting and implementing external feedback on AI systems. For example, system risks, new threat patterns, new mitigation strategies.
Evidence
E008.2 Documentation: External feedback integration
Documentation showing external feedback collected and implemented - may include external security advisories reviewed, threat intelligence integrated, third-party recommendations adopted, or records of external input incorporated into system improvements.
Tags
Supplemental Control
Operational Practices
Internal processes
Requirement · Mandatory Requirement
Control activity
Maintaining AI infrastructure location documentation. For example, documenting geographic locations of foundation model processing and inference endpoint regions, and data handling locations of third-party AI service providers.
Reviewing and updating documentation regularly.
Evidence
E011.1 Documentation: AI processing locations
Subprocessor list showing third-party AI provider locations, infrastructure documentation listing cloud regions and inference endpoints, or data flow diagram with geographic processing locations and version history or review dates.
Tags
Mandatory Control
Operational Practices
Trust Center
Requirement · Optional Requirement
Control activity
Defining quality objectives, metrics, and risk management approach for AI systems. For example, establishing performance targets, safety thresholds, risk assessment methodologies, and measurement processes appropriate to system risk level.
Evidence
E013.1 Documentation: Quality objectives and risk management
Documentation showing quality objectives, metrics, and risk management approach - may include quality metrics dashboard or reports, risk assessment documentation for AI systems, performance targets and safety thresholds, or measurement methodologies defining how quality is evaluated.
Tags
Mandatory Control
Operational Practices
Internal policies
Requirement · Optional Requirement
Control activity
Establishing change management, approval processes, and documentation standards. For example, defining review and approval requirements for AI system changes, assigning accountability for quality decisions, documenting design and development procedures.
Evidence
E013.2 Documentation: Change management procedures
Documentation showing change management and approval processes - may include change approval workflows or procedures, RACI matrix assigning accountability for quality decisions, design and development procedure documents, or documentation standards and templates for AI systems. May be fulfilled by evidence submitted to E004: Assign accountability.
Tags
Mandatory Control
Operational Practices
Internal policies
Requirement · Optional Requirement
Control activity
Establishing data management and record-keeping systems. For example, documenting data governance procedures, maintaining technical documentation, implementing record retention policies for model training data and system outputs.
Evidence
E013.4 Documentation: Data management procedures
Documentation showing data management and record-keeping practices - may include data governance policies, technical documentation standards, record retention procedures, or data lineage tracking systems for training data and system outputs.
Tags
Supplemental Control
Operational Practices
Internal policies
Requirement · Optional Requirement
Control activity
Documenting communication procedures with regulatory authorities and stakeholders. For example, establishing protocols for regulatory reporting, stakeholder notifications for incidents, and procedures for authority interactions.
Evidence
E013.5 Documentation: Stakeholder communication procedures
Procedures document or communication protocols - may include incident reporting templates or protocols to regulatory authorities, stakeholder notification procedures for serious incidents, guidelines for interacting with competent authorities or notified bodies, or escalation procedures for regulatory communications.
Tags
Supplemental Control
Operational Practices
Internal processes
Requirement · Optional Requirement
Control activity
Defining policies for sharing transparency documentation with external stakeholders. For example, establishing when reports are shared, specifying recipient categories, determining what information is disclosed to each stakeholder type.
Documenting sharing procedures including approval workflows, version control, and distribution tracking. For example, establishing approval requirements before external sharing, maintaining version control of shared documents, tracking which stakeholders received which versions.
Evidence
E017.3 Documentation: Transparency report sharing policy
Policy document defining transparency sharing practices - may include sharing triggers, recipient categories with disclosure levels (regulators, customers, affected parties, public), or matrix mapping stakeholder types to shared documentation (model cards, datasheets, performance reports, incident summaries).
Tags
Supplemental Control
Operational Practices
Internal processes
Internal policies