AIUC-1 is updated formally each quarter to ensure that the standard evolves as technology, risk, and regulation evolve.
The most recent version of AIUC-1 was released on April 15, 2026.
The next version of AIUC-1 will be released on July 15, 2026.
This update focuses on MCP and A2A protocol security, third-party risk management, and agent identity and access management. This quarter's refresh updates 14 requirements and 23 controls.
- Introduced new controls for MCP and A2A protocol security, standardizing authentication, transport, runtime containment, and logging across agentic interfaces.
- Expanded third-party risk controls, including making third-party access monitoring mandatory.
- Expanded controls for agent identity, permissions, and access management.
All requirements (Q2 2026)
Updated typical evidence descriptions to move away from screenshots in favor of substantive and verifiable evidence.
A002: Establish output data policy (Q2 2026)
Included both opt-in and opt-out practices.

A002.1 Documentation: Output usage and ownership policy (Q2 2026)
Included both opt-in and opt-out practices, ensuring balanced coverage of consent models.

A002.2 Config: Opt-in/opt-out and output deletion implementation (Q2 2026)
Added a new control incorporating implementation testing into A002.

A003: Limit AI agent data collection (Q2 2026)
Specified that the requirement covers data access more generally, and included more controls on agent IAM within it.

A003.1 Config: Data access scoping (Q2 2026)
Clarified that the control covers agent access and identity management, not just data collection.

A003.3 Config: Agent identity management (Q2 2026)
Separated agent identity and access management into distinct controls, with a focus on providing a configurable, auditable architecture that integrates with enterprise IAM systems.

A003.4 Config: Agent access and permissions management (Q2 2026)
Separated agent identity and access management into distinct controls, with a focus on providing a configurable, auditable architecture.
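To make the identity/access split above concrete, here is a minimal sketch of per-agent identities with auditable permission checks. All class names, scope strings, and the storage shape are illustrative assumptions, not AIUC-1 prescriptions; a real deployment would delegate to an enterprise IAM system rather than an in-memory store.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AgentIdentity:
    """A distinct, auditable identity for each agent (hypothetical schema)."""
    agent_id: str
    owner: str  # the enterprise principal the agent acts on behalf of


@dataclass
class PermissionStore:
    """Maps agent identities to explicitly granted scopes; logs every decision."""
    grants: dict = field(default_factory=dict)
    audit_log: list = field(default_factory=list)

    def grant(self, agent: AgentIdentity, scope: str) -> None:
        self.grants.setdefault(agent.agent_id, set()).add(scope)

    def check(self, agent: AgentIdentity, scope: str) -> bool:
        allowed = scope in self.grants.get(agent.agent_id, set())
        # Every access decision is recorded so it can be audited later.
        self.audit_log.append((agent.agent_id, scope, allowed))
        return allowed


store = PermissionStore()
support_bot = AgentIdentity(agent_id="support-bot-01", owner="svc-support")
store.grant(support_bot, "tickets:read")

print(store.check(support_bot, "tickets:read"))   # granted scope -> True
print(store.check(support_bot, "billing:write"))  # never granted -> False
```

Keeping identity (who the agent is) separate from permissions (what it may touch) mirrors the split between A003.3 and A003.4.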
B002: Detect adversarial input (Q2 2026)
Clarified that monitoring is intended to enable responding to adversarial inputs.

B006: Prevent unauthorized AI agent actions (Q2 2026)
Changed at the controls level: added MCP coverage and additional execution-level containment controls.

B006.1 Config: Agent service access restrictions (Q2 2026)
Covered MCP server access alongside existing API and service-level restrictions.

B006.3 Config: Execution-level safeguards (Q2 2026)
Added execution-level containment controls that limit the blast radius when an agent or approved MCP server behaves unexpectedly at runtime.
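One hedged sketch of what execution-level containment can look like: a runtime guard that enforces a tool allowlist and a per-run call budget, bounding what a single misbehaving agent run can do. The guard class, tool names, and limits are hypothetical, not taken from the standard.

```python
class ContainmentError(Exception):
    """Raised when a run exceeds its containment boundaries."""


class ExecutionGuard:
    """Caps what a single agent run can do at runtime (illustrative limits)."""

    def __init__(self, allowed_tools, max_calls=5):
        self.allowed_tools = set(allowed_tools)
        self.max_calls = max_calls
        self.calls = 0

    def run(self, tool_name, fn, *args):
        if tool_name not in self.allowed_tools:
            raise ContainmentError(f"tool {tool_name!r} is not allowlisted")
        if self.calls >= self.max_calls:
            raise ContainmentError("per-run call budget exhausted")
        self.calls += 1
        return fn(*args)


guard = ExecutionGuard(allowed_tools={"lookup_order"}, max_calls=2)
print(guard.run("lookup_order", lambda oid: {"id": oid}, "A-17"))
```

Real systems would add timeouts, resource limits, and sandboxing; the point is that limits are enforced at execution time, not only at configuration time.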
B008: Protect AI system deployment environment (Q2 2026)
Expanded the scope of the requirement from the AI model only to the full system.

B008.1 Config: Model access controls (Q2 2026)
Expanded the scope of the control from the AI model only to the full system.

B008.2 Config: API and agentic interface authentication (Q2 2026)
Expanded deployment security controls to address MCP and A2A protocols alongside traditional API endpoints, with dedicated controls for authentication, transport security, and message integrity across all agentic interfaces.

B008.3 Config: API and agentic interface transport security (Q2 2026)
See above.

B008.4 Config: Agentic interface data integrity (Q2 2026)
See above.
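A minimal illustration of message integrity for an agentic interface, assuming a pre-shared key per interface: each message is signed, and tampering with the payload invalidates the signature. Real MCP/A2A deployments would layer this on TLS and proper key management; the key, envelope shape, and field names here are assumptions.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # assumption: a pre-shared key per agent interface


def sign_message(payload: dict) -> dict:
    """Attach an HMAC tag computed over a canonical encoding of the payload."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": tag}


def verify_message(envelope: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])


msg = sign_message({"tool": "get_ticket", "args": {"id": 42}})
print(verify_message(msg))  # untampered message verifies -> True

tampered = {"payload": {"tool": "get_ticket", "args": {"id": 999}},
            "sig": msg["sig"]}
print(verify_message(tampered))  # altered payload fails -> False
```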
C001: Define AI risk taxonomy (Q2 2026)
Generalized the risk taxonomy requirement and changed the testing frequency to every 12 months.

C001.2 Documentation: Risk taxonomy reviews (Q2 2026)
Aligned the testing frequency to a 12-month cycle consistent with the risk management framework update schedule.

C006: Prevent output vulnerabilities (Q2 2026)
Clarified that the requirement is in scope for AI agents that generate code (see C006.1, C006.2, C006.3) and text (see C006.2).

C006.1 Config: Output sanitization (Q2 2026)
Clarified that the control is in scope for code-generating AI agents.

C006.2 Demonstration: Warning labels for untrusted content (Q2 2026)
See above.

C006.3 Config: Adversarial output detection (Q2 2026)
See above.
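As a small example of the kind of output sanitization C006.1 gestures at: untrusted model output can be escaped before rendering so that any generated markup stays inert. The function name is illustrative, and production systems would typically use a full sanitization library rather than bare escaping.

```python
import html


def sanitize_output(model_output: str) -> str:
    """Escape model output before rendering so generated markup is inert."""
    return html.escape(model_output)


unsafe = 'Click <script>steal()</script> now'
print(sanitize_output(unsafe))
# -> Click &lt;script&gt;steal()&lt;/script&gt; now
```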
C007: Flag high risk outputs for human review (Q2 2026)
Clarified via an updated label that C007 concerns human-in-the-loop review.

C007.1 Documentation: Definition of high-risk output criteria (Q2 2026)
Expanded the requirement scope from recommendations to outputs generally.

C007.3 Documentation: Human review workflows (Q2 2026)
Included an example of auditing human review workflows (i.e., checking the effectiveness of oversight over time) to mitigate against 'automation bias'.
C009: Enable real-time feedback and intervention (Q2 2026)
Changed at the controls level: synthesized controls and added a control for actioning user feedback.

C009.2 Documentation: User feedback & intervention reviews (Q2 2026)
Included practical validation and actioning of relevant user feedback, and streamlined three controls into one.
D003: Restrict unsafe tool calls (Q2 2026)
Changed at the controls level: extended tool call validation to cover MCP servers alongside approved functions, and expanded the scope of human approval for sensitive tool operations to cover multi-step workflows.

D003.1 Config: Tool authorization & validation (Q2 2026)
Extended tool call validation to cover MCP servers alongside approved functions.

D003.3 Config: Tool call log (Q2 2026)
Extended tool call validation to cover MCP servers alongside approved functions.

D003.4 Config: Human-approval workflows (Q2 2026)
Expanded scope to cover multi-step workflows, reflecting the trend of AI agents increasingly chaining tool calls across sequential operations rather than executing single actions in isolation.
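The multi-step expansion can be sketched as follows: the whole workflow is gated on human approval if any step in the chain is sensitive, rather than each tool call being checked in isolation. The tool names and the sensitivity set are assumptions for illustration.

```python
# Assumption: the operator classifies which tools count as sensitive.
SENSITIVE_TOOLS = {"issue_refund", "delete_record"}


def needs_human_approval(workflow: list) -> bool:
    """Gate the entire multi-step workflow if any single step is sensitive.

    Checking steps in isolation would miss chains where a benign lookup
    feeds into a sensitive action later in the sequence.
    """
    return any(step in SENSITIVE_TOOLS for step in workflow)


print(needs_human_approval(["lookup_order", "issue_refund", "send_email"]))  # True
print(needs_human_approval(["lookup_order", "send_email"]))                  # False
```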
E005: Document data storage security (Q2 2026)
Clarified that the requirement is about ensuring companies establish clear security and compliance requirements for hosting platforms, rather than the act of assessing cloud vs. on-prem.

E009: Monitor third-party access (Q2 2026)
Made E009 a mandatory control.

E015: Log AI system activity (Q2 2026)
Expanded the scope of the requirement from the AI model only to the full system.

E015.2 Config: AI agent logging implementation (Q2 2026)
Extended logging to cover the intermediate steps between input and output (i.e., tool calls, sub-agent actions, and provenance metadata), providing traceability across the full execution chain.
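One way to implement that kind of intermediate-step logging is a per-run trace that records the input, every tool call with its provenance metadata, and the final output as an ordered chain. The trace format and field names below are illustrative assumptions, not a schema from the standard.

```python
import json
import time
import uuid


class ExecutionTrace:
    """Records every intermediate step between input and output for one run."""

    def __init__(self, user_input: str):
        self.trace_id = str(uuid.uuid4())
        self.steps = [{"type": "input", "content": user_input}]

    def log_tool_call(self, tool: str, args: dict, result) -> None:
        self.steps.append({
            "type": "tool_call",
            "tool": tool,
            "args": args,
            "result": result,
            "ts": time.time(),  # provenance metadata: when the call happened
        })

    def log_output(self, output: str) -> None:
        self.steps.append({"type": "output", "content": output})

    def export(self) -> str:
        """Serialize the full chain for the central logging system."""
        return json.dumps({"trace_id": self.trace_id, "steps": self.steps})


trace = ExecutionTrace("Where is my order?")
trace.log_tool_call("lookup_order", {"id": "A-17"}, {"status": "shipped"})
trace.log_output("Your order has shipped.")
print(trace.export())
```

Because the steps are ordered, an auditor can replay exactly which tool calls and sub-agent actions led from a given input to a given output.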
E016: Implement AI disclosure mechanisms (Q2 2026)
Changed at the controls level: adjusted disclosure to cover AI agents and systems.

E016.4 Demonstration: Automation AI disclosure (Q2 2026)
Adjusted disclosure to cover AI agents and systems.
A detailed comparison of the previous standards (October 1, 2025 and January 15, 2026) with the current standard (April 15, 2026) is available on GitHub.
Previous versions:
- January 15, 2026
- October 1, 2025
- July 22, 2025: First launch of the standard
Customer-focused. We prioritize requirements that enterprise customers demand and vendors can pragmatically meet, increasing confidence without adding unnecessary compliance burden.
AI-focused. We do not cover non-AI risks that are addressed in frameworks or regulations like SOC 2, ISO 27001, or GDPR.
Insurance-enabling. We prioritize risks that lead to direct harms and financial losses.
Adapts to regulation. We update AIUC-1 to make it easier to comply with new regulations.
Adapts to AI progress. We update AIUC-1 to keep up with new capabilities, like reasoning capabilities and new modalities.
Adapts to the threat landscape. We update AIUC-1 in response to real-world incidents.
Continuous improvement. We regularly update the standard based on real-world deployment experience and stakeholder feedback.
Predictability. We review the standard and push updates quarterly: on January 15, April 15, July 15, and October 15 of each year.
Transparency. We keep a public changelog and share our lessons.
Backward compatibility. Existing certifications remain valid during transition periods.
We welcome feedback, ideas, suggestions, and criticism: provide input on AIUC-1.