This update focuses on detailed guidance for control implementation, including transparently publishing on the website the typical evidence submitted to pass AIUC-1 requirements. In addition, several requirements were updated to incorporate stronger PII protection in logs, threat modelling in pre-deployment testing, multimodal coverage of AI labelling, pickle-file security tools, and more. Finally, more information on the AIUC-1 certification process and scoping questionnaire was published.
Updated 26 requirements based on audit experience, input from technical contributors, feedback from AIUC-1 Consortium members, and external peer-review comments.
Detailed the typical evidence submitted to pass AIUC-1, with suggested locations and concrete examples, making it easier for organizations to begin an AIUC-1 readiness assessment.
Published the AIUC-1 scoping questionnaire and certification process details to ensure consistent application of AIUC-1 across accredited auditors.
Q1 2026
All requirements
Enabled Excel export of all requirements and controls for easier readiness assessment
Q1 2026
All requirements
Tagged all requirements with relevant AI agent capabilities, which are used as input to the scoping questionnaire to ensure appropriate application of AIUC-1 requirements
Q1 2026
All requirements
Defined and published typical evidence for all controls, tagged by evidence category and typical location
Q1 2026
Scoping questionnaire
Published the AIUC-1 scoping questionnaire, enabling a consistent approach to scoping by accredited auditors
Q1 2026
Should include and May include control activities
Clarified application of control activities
For controls labeled "Should include": Organizations must demonstrate core controls to meet the requirement. Auditors may accept alternative implementations that achieve equivalent outcomes
For controls labeled "May include": Supplemental controls demonstrating additional safeguards. Recommended when particularly relevant to the organization's use case
Q1 2026
A001: Establish input data policy
Specified evidence requirements across policies and their enforcement, particularly for data retention
Q1 2026
A003: Limit AI agent data collection
Removed optional control activity focused on dynamic context-based restrictions given limited technical pathways for implementation
Q1 2026
A004: Protect IP & trade secrets
Specified controls with a stricter requirement for user guardrails
Provided specific guidance on foundation model IP protections
Added supplemental safeguards
Q1 2026
A005: Prevent cross-customer data exposure
Revised controls to avoid overlap and specify the intent of the requirement
Removed optional controls on adapting safeguards to industry-specific risks
Removed optional controls on inference-time data isolation
Q1 2026
A006: Prevent PII leakage
Increased PII protection requirements for logs
Removed incident management control to avoid overlap with E001
Removed cross-tenant contamination control to avoid overlap with A005
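As an illustrative sketch (not prescribed by AIUC-1), one common way to strengthen PII protection in logs is a redaction filter applied before records reach any handler. The patterns and logger name below are hypothetical examples:

```python
import logging
import re

# Hypothetical patterns; production systems typically need broader,
# locale-aware coverage (names, addresses, national IDs, etc.).
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "<CARD>"),
]

class PIIRedactionFilter(logging.Filter):
    """Redact PII from log messages before they are written anywhere."""
    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern, placeholder in PII_PATTERNS:
            msg = pattern.sub(placeholder, msg)
        # Replace the formatted message so no handler sees the raw PII.
        record.msg, record.args = msg, None
        return True

logger = logging.getLogger("agent")  # hypothetical logger name
logger.addFilter(PIIRedactionFilter())
```

Attaching the filter to the logger (rather than a single handler) redacts messages for every destination at once, which matches the intent of protecting logs wherever they are stored.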
Q1 2026
A007: Prevent IP violations
Clarified controls with emphasis on foundation model IP protections
Added additional safeguards tagged as supplemental controls
Removed third-party IP incident response control to avoid overlap with other requirements
Q1 2026
B006: Limit AI agent system access
Clarified the requirement's focus on the security aspects of limiting system access
Emphasized agent privilege restrictions and monitoring
Q1 2026
B008: Protect model deployment environment
Included Trail of Bits' Fickling tool as example safeguard in control B008.4 based on peer-review feedback
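Fickling statically analyzes pickle files for malicious opcodes. As a complementary, hedged illustration (this sketch is not part of the standard's text), Python's own pickle documentation describes a restricted `Unpickler` that refuses unexpected globals; the allow-list below is a hypothetical example:

```python
import io
import pickle

# Illustrative allow-list: only these globals may be deserialized.
# A real deployment would derive this from the model format in use.
SAFE_GLOBALS = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Refuse any global not on the allow-list; this blocks the
        # os.system-style payloads that malicious pickles rely on.
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# A benign payload round-trips:
print(restricted_loads(pickle.dumps([1, 2, 3])))  # [1, 2, 3]
```

Static scanning (as Fickling does) and load-time restriction are complementary: the former inspects files at rest, the latter fails closed if a malicious file reaches deserialization.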
Q1 2026
B009: Limit output over-exposure
Tagged user notification control as supplemental, recognizing it is not always in the organization's interest to disclose output limitations
Minor revisions to output fidelity limitations to align with MITRE AML-M0002
Q1 2026
C001: Define AI risk taxonomy
Simplified controls to focus on AI Risk Taxonomy documentation and reviews
Q1 2026
C002: Conduct pre-deployment testing
Included explicit reference to threat modelling in controls based on peer-review feedback
Q1 2026
C003: Prevent harmful outputs
Removed control on review and appeal mechanisms, which are beyond the intent of the requirement
Q1 2026
C006: Prevent output vulnerabilities
Removed control on logging sanitization activities, as this is beyond standard practice for organizations
Q1 2026
C008: Monitor AI risk categories
Removed control on proactive detection as this is already covered by C008.2
Q1 2026
C009: Enable real-time feedback and intervention
Revised controls to cover other modalities (e.g. voice, image)
Tagged review of intervention logs as a supplemental control
Q1 2026
D003: Restrict unsafe tool calls
Revised controls to avoid overlap with A003 and B006
Emphasized tool call validation and monitoring specifically
Q1 2026
E005: Assess cloud vs on-prem processing
Revised controls to focus on cloud vs. on-prem decisions
Removed security and vendor due diligence controls covered in other requirements
Q1 2026
E007: Document system change approvals
This requirement was merged into E004: Assign accountability, which already requires documenting approval with supporting evidence
Q1 2026
E009: Monitor third-party access
Clarified that the requirement is to configure monitoring, rather than purely to document procedures
Q1 2026
E010: Establish AI acceptable use policy
Combined supplemental controls into one without changing the nature of the control activities
Q1 2026
E013: Implement quality management system
Updated controls to simplify the requirement while still fulfilling EU AI Act Article 17
Q1 2026
E014: Share transparency reports
This requirement was merged into E017 to avoid overlap and to recognize transparency policy sharing procedures
Q1 2026
E015: Log model activity
Strengthened controls around PII protection
Improved log immutability and tamper-proofing based on peer-review feedback
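One standard technique for log tamper-proofing is a hash chain, where each entry's digest covers the previous entry's digest so that any later modification is detectable. This is an illustrative sketch of the general technique, not AIUC-1's required implementation:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder digest before the first entry

def append_entry(chain: list, event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"event": event, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every digest; any edited entry breaks the chain."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"actor": "agent", "action": "tool_call"})
append_entry(log, {"actor": "agent", "action": "response"})
print(verify_chain(log))  # True
log[0]["event"]["action"] = "edited"
print(verify_chain(log))  # False
```

In practice the chain head is typically anchored externally (e.g. in write-once storage) so an attacker cannot simply rebuild the whole chain after tampering.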
Q1 2026
E016: Implement AI disclosure mechanisms
Revised control activities to ensure coverage of multiple modalities (e.g. voice, text, image)
Q1 2026
F001: Prevent AI cyber misuse
Removed control requiring a signed attestation that cyber misuse safeguards remain active
Encouraged organizations using open-source or fine-tuned third-party models to opt into the supplemental control
Q1 2026
F002: Prevent catastrophic misuse
Removed control requiring a signed attestation that CBRN safeguards remain active
Encouraged organizations using open-source or fine-tuned third-party models to opt into the supplemental control
A detailed comparison of the October 1, 2025 and January 15, 2026 versions is available on GitHub