AIUC-1 is formally updated each quarter so that the standard evolves alongside technology, risk, and regulation.
The next version of AIUC-1 will be released on January 15, 2026. Going forward, updates will be released quarterly on the 15th of January, April, July, and October.
The most recent version of AIUC-1 was released on October 1, 2025.
These tenets guide how we update the standard:
Customer-focused. We prioritize requirements that enterprise customers demand and vendors can pragmatically meet, increasing confidence without adding unnecessary compliance burden.
AI-focused. We do not cover non-AI risks that are already addressed by frameworks and regulations such as SOC 2, ISO 27001, or GDPR.
Insurance-enabling. We prioritize risks that lead to direct harms and financial losses.
Adapts to regulation. We update AIUC-1 to make it easier to comply with new regulations.
Adapts to AI progress. We update AIUC-1 to keep pace with new capabilities, such as advanced reasoning and new modalities.
Adapts to the threat landscape. We update AIUC-1 in response to real-world incidents.
Continuous improvement. We regularly update the standard based on real-world deployment experience and stakeholder feedback.
Predictability. We review the standard and push updates quarterly, on January 15, April 15, July 15, and October 15 of each year.
Transparency. We keep a public changelog and share our lessons.
Backward compatibility. Existing certifications remain valid during transition periods.
We welcome feedback, ideas, suggestions, and criticism: provide input on AIUC-1.
This update focuses on more detailed guidance for control implementation, including transparently publishing on the website the typical evidence submitted to meet AIUC-1 requirements. In addition, several requirements were strengthened, for example with stronger PII protection in logs, threat modelling in pre-deployment testing, multimodal coverage of AI labelling, and pickle-file security tooling. Finally, more information on the AIUC-1 certification process and scoping questionnaire was published.
Updated 26 requirements based on audit experience, input from technical contributors, feedback from AIUC-1 Consortium members, and external peer-review comments.
Detailed the typical evidence submitted to pass AIUC-1, with suggested locations and concrete examples, making it easier for organizations to begin an AIUC-1 readiness assessment
Published AIUC-1 scoping questionnaire and certification process details to ensure consistent application of AIUC-1 across accredited auditors
2026-01-15
All requirements
Defined and published typical evidence for all controls tagged by evidence category and typical location
2026-01-15
Scoping questionnaire
Published the AIUC-1 scoping questionnaire, enabling consistent approach to scoping by accredited auditors
2026-01-15
All requirements
Tagged all requirements with the relevant AI agent capabilities, which feed into the scoping questionnaire to ensure appropriate application of AIUC-1 requirements
2026-01-15
All requirements
Enabled Excel export of all requirements and controls for easier readiness assessment
2026-01-15
Should include and May include control activities
Clarified application of control activities
For controls labeled "Should include": Organizations must demonstrate core controls to meet the requirement. Auditors may accept alternative implementations that achieve equivalent outcomes
For controls labeled "May include": Supplemental controls demonstrating additional safeguards. Recommended when particularly relevant to the organization's use case
2026-01-15
A001: Establish input data policy
Specified evidence requirements across policies and enforcement of policies, particularly for data retention
2026-01-15
A003: Limit AI agent data collection
Removed optional control activity focused on dynamic context-based restrictions given limited technical pathways for implementation
2026-01-15
A004: Protect IP & trade secrets
Specified controls with a stricter requirement for user guardrails
Provided specific guidance on foundation model IP protections
Added supplemental safeguards
2026-01-15
A005: Prevent cross-customer data exposure
Revised controls to avoid overlap and specify the intent of the requirement
Removed optional controls on adapting safeguards to industry-specific risks
Removed optional controls on inference-time data isolation
2026-01-15
A006: Prevent PII leakage
Increased PII protection requirements for logs
Removed incident management control to avoid overlap with E001
Removed cross-tenant contamination control to avoid overlap with A005
2026-01-15
A007: Prevent IP violations
Clarified controls with emphasis on foundation model IP protections
Added additional safeguards tagged as supplemental controls
Removed third-party IP incident response control to avoid overlap with other requirements
2026-01-15
B006: Limit AI agent system access
Clarified the requirement's focus on the security aspects of limiting system access
Emphasized agent privilege restrictions and monitoring
2026-01-15
B008: Protect model deployment environment
Included Trail of Bits' Fickling tool as an example safeguard in control B008.4 based on peer-review feedback
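The pickle-file risk behind this change can be sketched with only the Python standard library. Fickling performs much deeper analysis, but the core idea is the same: inspect a pickle's opcode stream for instructions that can execute code when the file is loaded, without ever unpickling it. The snippet below is an illustrative sketch of that idea, not Fickling's actual implementation.

```python
# Illustrative sketch (not Fickling itself) of what a pickle-scanning
# safeguard checks: walk the pickle opcode stream with the standard
# library's pickletools and flag opcodes that can import or invoke
# arbitrary code when the file is loaded.
import pickle
import pickletools

# Opcodes that resolve importable names or call objects -- the
# mechanism behind pickle-based code execution.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def suspicious_opcodes(data: bytes) -> list:
    """Return the names of potentially dangerous opcodes in a pickle."""
    return [op.name for op, _arg, _pos in pickletools.genops(data)
            if op.name in SUSPICIOUS_OPCODES]

# A plain data pickle (e.g. model weights) uses none of the flagged opcodes...
benign = pickle.dumps({"weights": [1, 2, 3]})
# ...while pickling a reference to a callable requires STACK_GLOBAL.
risky = pickle.dumps(pickletools.genops)

print(suspicious_opcodes(benign))  # []
print(suspicious_opcodes(risky))
```

Scanning opcodes this way is safe because the file is never deserialized; calling `pickle.load` on an untrusted file would already execute any embedded payload.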
2026-01-15
B009: Limit output over-exposure
Tagged user notification control as supplemental, recognizing it is not always in the organization's interest to disclose output limitations
Minor revisions to output fidelity limitations to align with MITRE ATLAS AML.M0002
2026-01-15
C001: Define AI risk taxonomy
Simplified controls to focus on AI Risk Taxonomy documentation and reviews
2026-01-15
C002: Conduct pre-deployment testing
Included explicit reference to threat modelling in controls based on peer-review feedback
2026-01-15
C003: Prevent harmful outputs
Removed control on review and appeal mechanisms, which are beyond the intent of the requirement
2026-01-15
C006: Prevent output vulnerabilities
Removed control on logging sanitization activities, as this is beyond standard practice for organizations
2026-01-15
C008: Monitor AI risk categories
Removed control on proactive detection as this is already covered by C008.2
2026-01-15
C009: Enable real-time feedback and intervention
Revised controls to cover other modalities (e.g. voice, image)
Tagged review of intervention logs as a supplemental control
2026-01-15
D003: Restrict unsafe tool calls
Revised controls to avoid overlap with A003 and B006
Emphasized tool call validation and monitoring specifically
2026-01-15
E005: Assess cloud vs on-prem processing
Revised controls to focus on cloud vs. on-prem decisions
Removed security and vendor due diligence controls covered in other requirements
2026-01-15
E007: Document system change approvals
This requirement was merged into E004: Assign accountability, which already requires documenting approval with supporting evidence
2026-01-15
E009: Monitor third-party access
Clarified that monitoring must be configured, rather than merely documented in procedures
2026-01-15
E010: Establish AI acceptable use policy
Combined supplemental controls into one without changing the nature of the control activities
2026-01-15
E013: Implement quality management system
Updated controls to simplify the requirement while still fulfilling EU AI Act Article 17
2026-01-15
E014: Share transparency reports
This requirement was merged into E017 to avoid overlap and to recognize transparency policy sharing procedures
2026-01-15
E015: Log model activity
Strengthened controls around PII protection
Improved log immutability and tamper-proofing based on peer-review feedback
2026-01-15
E016: Implement AI disclosure mechanisms
Revised control activities to ensure coverage of multiple modalities (e.g. voice, text, image)
2026-01-15
F001: Prevent AI cyber misuse
Removed control requiring a signed attestation that cyber misuse safeguards remain active
Encouraged organizations using open-source or fine-tuned third-party models to opt into the supplemental control
2026-01-15
F002: Prevent catastrophic misuse
Removed control requiring a signed attestation that CBRN safeguards remain active
Encouraged organizations using open-source or fine-tuned third-party models to opt into the supplemental control
October 1, 2025
July 22, 2025
First launch of the standard