Implement safeguards to limit AI agent data access based on task, user role, agent role and context
Code implementing data access restrictions, which may include:
- a RAG retrieval function with document filtering logic
- session scoping configuration limiting data access per session ID
- workflow conditional logic gating data collection by stage
- permission decorators or middleware checking user roles before data access
- scoping functions rejecting out-of-scope queries with error messages
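A permission decorator of this kind can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation: the role-to-scope mapping, the `OutOfScopeError` name, and the `retrieve_invoices` function are all hypothetical stand-ins for an organization's own RBAC store and retrieval layer.

```python
from functools import wraps

# Hypothetical role-to-scope mapping; a real deployment would load this
# from an RBAC/ABAC policy store rather than hard-code it.
ROLE_SCOPES = {
    "support_agent": {"tickets", "faq"},
    "billing_agent": {"invoices"},
}

class OutOfScopeError(PermissionError):
    """Raised when an agent requests data outside its permitted scope."""

def require_scope(collection):
    """Decorator gating a retrieval function on the caller's role scopes."""
    def decorator(func):
        @wraps(func)
        def wrapper(role, *args, **kwargs):
            allowed = ROLE_SCOPES.get(role, set())
            if collection not in allowed:
                # Reject out-of-scope queries with an explicit error message.
                raise OutOfScopeError(
                    f"role {role!r} may not access collection {collection!r}"
                )
            return func(role, *args, **kwargs)
        return wrapper
    return decorator

@require_scope("invoices")
def retrieve_invoices(role, query):
    # Placeholder for a real RAG retrieval call filtered to one collection.
    return [f"invoice matching {query!r}"]
```

The same pattern extends to session scoping (key `ROLE_SCOPES` by session ID) or workflow gating (key it by workflow stage).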
Screenshot of code showing that an alert or error-handling system is triggered upon authz check failure, or a screenshot of alerting configurations in logging software (e.g. PostHog, Sentry, Datadog, or Axiom, or a downstream alert in Slack)
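The code side of such alerting can be as small as an error-level log on every denial. The sketch below uses only the standard-library logger; in practice that log line would be shipped to a tool like Sentry or Datadog via a handler, which is where the screenshot-able alert configuration lives. The function and logger names are illustrative.

```python
import logging

# Dedicated logger so authz denials can be routed to an alerting handler
# (Sentry, Datadog, a Slack webhook, etc.) separately from app logs.
authz_log = logging.getLogger("agent.authz")

def check_access(role, resource, allowed_roles):
    """Return True if role may access resource; emit an alert-level log on denial."""
    if role in allowed_roles:
        return True
    authz_log.error("authz_denied role=%s resource=%s", role, resource)
    return False
```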
Documentation showing how identity governance is enabled, which may include:
- platform configuration showing unique, cryptographically verifiable identity assignment per agent instance
- API or SDK documentation for identity federation endpoints (e.g., OIDC token exchange)
- a sample agent card declaring agent capabilities and scopes
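An agent card of the kind described above might look like the following. The field names here are assumptions chosen for illustration, not a specific schema; the issuer URL and agent ID are hypothetical.

```python
# Illustrative agent card; field names are assumptions, not a fixed spec.
agent_card = {
    # Unique, per-instance identity (in practice backed by a verifiable credential).
    "agent_id": "urn:example:agents/billing-assistant/instance-7f3a",
    "identity": {
        "issuer": "https://idp.example.com",  # hypothetical OIDC issuer
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    },
    # Declared capabilities and the scopes they require.
    "capabilities": ["retrieve_invoices", "summarize_ticket"],
    "scopes": ["invoices:read", "tickets:read"],
}
```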
Documentation showing how permission governance is enabled, which may include:
- API or SDK documentation for permission scope configuration mappable to RBAC or ABAC policies
- platform settings or code demonstrating just-in-time credential issuance scoped to individual subtasks or tool calls
- architecture documentation showing how inherited permission chains from orchestrators or parent agents are bounded or surfaced for review
- a segregation-of-duties matrix identifying and preventing conflicting agent permission combinations across integrated systems
- DLP policy configuration showing inspection and blocking rules applied to outbound data transmitted through agent actions or tool calls
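Just-in-time credential issuance scoped to a single tool call can be sketched as follows. This is a toy model under stated assumptions: the credential shape, the `issue_jit_credential` and `credential_valid` names, and the 60-second default TTL are all illustrative, and a real system would use a signed token from an authorization server rather than a random string.

```python
import secrets
import time

def issue_jit_credential(agent_id, tool, scopes, ttl_seconds=60):
    """Mint a short-lived credential bound to one agent, one tool, and fixed scopes."""
    return {
        "token": secrets.token_urlsafe(16),  # stand-in for a signed token
        "agent_id": agent_id,
        "tool": tool,
        "scopes": list(scopes),
        "expires_at": time.time() + ttl_seconds,
    }

def credential_valid(cred, tool, scope, now=None):
    """Check the credential covers this exact tool call and has not expired."""
    now = time.time() if now is None else now
    return (
        cred["tool"] == tool
        and scope in cred["scopes"]
        and now < cred["expires_at"]
    )
```

Because each credential names one tool and expires quickly, a credential inherited from an orchestrator cannot be replayed against a different tool or subtask, which is the bounding property the documentation above asks to be evidenced.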
Organizations can submit alternative evidence demonstrating how they meet the requirement.

"We need a SOC 2 for AI agents—a familiar, actionable standard for security and trust."

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state-of-the-art defensive strategies."

"Today, enterprises can't reliably assess the security of their AI vendors—we need a standard to address this gap."

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

"AIUC-1 standardizes how AI is adopted. That's powerful."

"An AIUC-1 certificate enables me to sign contracts much faster—it's a clear signal I can trust."