
This quarter saw the most engaged standard update cycle since the launch of AIUC-1, a direct result of the Consortium growing to 200+ CISOs, GRC leaders, practitioners, and academics from leading organizations around the world.
As a result, 14 requirements and 23 controls were updated or added in the Q2 release, with many more ideas surfaced for future updates. This post outlines the criteria used to determine what becomes codified, the tenets guiding every update, and the priority areas carried forward into the Q3 update.
For a proposed idea to move from discussion to a codified control or requirement within AIUC-1, it must meet the following three criteria:
In addition to the three criteria, new requirements and controls included in each standard update must also align with the overall AIUC-1 tenets.
Ideas that align with where the industry is heading - but where there’s no clear consensus yet - are put into working groups for further exploration and revisited in later updates once the practice matures. The three priority areas below each fit this pattern.
1) Agent runtime governance
The Q2 update introduced foundational controls for agent identity (A003.3) and permissions (A003.4), mandating that AI agent companies enable agent identity and access management by building permission-ready architecture.

Q2 2026 Changelog for A003
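As a minimal sketch of what a permission-ready architecture could look like in practice (the names and structures below are illustrative assumptions, not drawn from the standard), each agent action is bound to a verifiable identity and an explicit permission grant, with access denied by default:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str   # unique, verifiable agent identity (cf. A003.3)
    owner: str      # accountable human or team behind the agent

@dataclass
class PermissionGrant:
    identity: AgentIdentity
    allowed_tools: frozenset  # tools this agent may invoke (cf. A003.4)

def authorize(grant: PermissionGrant, tool: str) -> bool:
    """Deny by default: the agent may only call explicitly granted tools."""
    return tool in grant.allowed_tools

# Illustrative usage: a billing agent granted read-only access.
billing = AgentIdentity(agent_id="agent-billing-01", owner="payments-team")
grant = PermissionGrant(identity=billing, allowed_tools=frozenset({"read_invoice"}))

assert authorize(grant, "read_invoice")      # explicitly granted
assert not authorize(grant, "issue_refund")  # denied by default
```

The deny-by-default check is the key property: permissions are configured per identity up front, rather than inferred at call time.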
The open question raised in the Q2 peer review is whether runtime governance should be introduced to extend beyond these foundations: identity verification at authentication time and permissions management at configuration time. Technical guidance around agentic runtime governance - evaluating agent behavior at execution time - is still taking shape. Consortium members are also actively debating where the responsibility should sit: with the AI agent vendor, to enable runtime governance enforcement, or with the downstream enterprise and end user deploying the agent.
Leading up to the Q3 update, we will explore whether and how AIUC-1 should define outcome-based runtime governance requirements, such as behavioral monitoring or interruption mechanisms for agents that deviate from expected patterns.
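One way such outcome-based runtime governance could work, sketched here with hypothetical names and purely illustrative logic, is a monitor that observes each action at execution time and interrupts the agent when it deviates from its expected pattern or exceeds a step budget:

```python
class RuntimeGovernor:
    """Illustrative runtime monitor: checks agent behavior at execution time."""

    def __init__(self, expected_actions, max_steps):
        self.expected_actions = set(expected_actions)
        self.max_steps = max_steps
        self.steps = 0

    def observe(self, action: str) -> None:
        """Interrupt (raise) when the agent deviates from expected behavior."""
        self.steps += 1
        if action not in self.expected_actions:
            raise RuntimeError(f"interrupted: unexpected action {action!r}")
        if self.steps > self.max_steps:
            raise RuntimeError("interrupted: step budget exceeded")

gov = RuntimeGovernor(expected_actions={"search", "summarize"}, max_steps=10)
gov.observe("search")  # within the expected pattern, execution continues
try:
    gov.observe("send_email")  # deviation: interrupted at execution time
except RuntimeError as e:
    print(e)
```

Note this complements, rather than replaces, identity checks at authentication time and permission checks at configuration time: the governor evaluates what the agent actually does, not what it is allowed to do.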
2) Risk-based logging
The Q2 update extended logging to cover agent execution (E015.2), ensuring that multi-step agent activity is captured alongside traditional interaction logs. Consortium members flagged a nuance: logging scope should be driven by threat modelling - identifying which risks warrant monitoring - rather than by uniform logging obligations applied across all agent types.

Q2 2026 Changelog for E015
The Q3 update will explore the introduction of risk-tiered logging requirements - calibrating logging obligations based on agent capabilities such as tool access, data sensitivity, and level of autonomy. This work will account for the fact that the downstream enterprise or end user deploying the agent is often best placed to determine what to alert on, with the threat model, risk appetite, and operational context varying by use case.
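A minimal sketch of how such risk-tiered calibration might work (the capability inputs, scoring, and tier names below are assumptions for illustration, not standard terminology):

```python
def logging_tier(tool_access: bool, sensitive_data: bool, autonomous: bool) -> str:
    """Map an agent's capabilities to an illustrative logging obligation tier."""
    score = sum([tool_access, sensitive_data, autonomous])
    if score >= 2:
        return "full"      # e.g. every step, inputs/outputs, alerting enabled
    if score == 1:
        return "standard"  # e.g. interaction logs plus tool-call metadata
    return "minimal"       # e.g. interaction logs only

# A highly capable agent carries the heaviest logging obligations.
assert logging_tier(tool_access=True, sensitive_data=True, autonomous=True) == "full"
# A read-only chat assistant with no tools carries the lightest.
assert logging_tier(tool_access=False, sensitive_data=False, autonomous=False) == "minimal"
```

In practice, the deploying enterprise would supply the capability inputs and decide what each tier alerts on, consistent with the point above that threat model and risk appetite vary by use case.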
3) Third-party access governance
The Q2 update made third-party access monitoring mandatory and extended controls to the AI-specific third-party risk surface (E009). Consortium members proposed that E009 expand beyond logging and metadata capture to include complementary monitoring and access controls such as:
This proposal is increasingly urgent given the rise of AI-augmented supply chain attacks targeting vulnerable third-party components - a class of risk that only grows as agents autonomously connect to more external services - and will be explored for inclusion in the Q3 update.
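As an illustration of the logging and metadata capture that E009 already covers (the service names, fields, and approval list below are hypothetical), each outbound call an agent makes to an external service is recorded, and calls to unapproved services are flagged for review:

```python
import datetime

# Hypothetical allow-list of vetted third-party services.
APPROVED_SERVICES = {"payments.example.com", "crm.example.com"}
access_log = []

def record_access(agent_id: str, service: str, operation: str) -> dict:
    """Capture metadata for an agent's outbound third-party call."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "service": service,
        "operation": operation,
        "approved": service in APPROVED_SERVICES,
    }
    access_log.append(entry)
    return entry

entry = record_access("agent-billing-01", "unknown-api.example.net", "POST /export")
assert entry["approved"] is False  # unapproved service flagged for review
```

The complementary controls under discussion would build on records like these, for example by blocking rather than merely flagging unapproved connections.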
Additional ideas surfaced in peer-review that we are actively evaluating ahead of Q3 include:
The rigor of each quarterly update reflects the depth of the Consortium feedback behind it. We also welcome input from the wider ecosystem - share your feedback here.
All updates to the standard are documented transparently, with the full changelog accessible here.

