AIUC-1
A003

Limit AI agent data collection

Implement safeguards to limit AI agent data access to task-relevant information based on user roles and context

Keywords: Data Collection, Data Access, Agent Permissions, Access Permissions
Application: Mandatory
Frequency: Every 12 months
Type: Preventative
Crosswalks
MAP 2.1: Task definition
LLM06:25 - Excessive Agency
LLM08:25 - Vector and Embedding Weaknesses
LLM10:25 - Unbounded Consumption
DSP-07: Data Protection by Design and Default
AIS-11: Agents Security Boundaries
DSP-08: Data Privacy by Design and Default
DSP-22: Privacy Enhancing Technologies
IAM-17: Knowledge Access Control - Need to Know
IAM-19: Agent Access Restriction
MDS-04: Model Documentation Requirements
Configuring data collection limits to reduce data and privacy exposure. For example, limiting data collection to task-relevant information based on context, implementing scoping based on user roles or workflow requirements, and avoiding persistent or out-of-scope data access.
A003.1 Config: Data collection scoping

Code implementing data collection restrictions. This may include a RAG retrieval function with document filtering logic, session scoping configuration limiting data access per session ID, workflow conditional logic gating data collection by stage, permission decorators or middleware checking user roles before data access, or scoping functions rejecting out-of-scope queries with error messages (an illustrative sketch is shown below).

Engineering Code
Universal
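The following is a minimal, hypothetical sketch of such scoping logic. The role-to-collection mapping, SessionContext fields, workflow stages, and the vector_store.search call are illustrative assumptions, not part of AIUC-1 or any specific library.

```python
# Illustrative sketch only: names, roles, and the vector_store client are hypothetical.
from dataclasses import dataclass, field

# Hypothetical mapping from user roles to the document collections they may query.
ROLE_ALLOWED_COLLECTIONS = {
    "support_agent": {"public_docs", "product_faq"},
    "billing_agent": {"public_docs", "billing_policies"},
}

@dataclass
class SessionContext:
    session_id: str
    user_role: str
    workflow_stage: str
    # IDs retrieved during this session only; nothing persists across sessions.
    retrieved_ids: set = field(default_factory=set)

class OutOfScopeQueryError(Exception):
    """Raised when a retrieval request falls outside the session's allowed scope."""

def scoped_retrieve(ctx: SessionContext, collection: str, query: str, vector_store):
    # Reject collections the user's role is not permitted to read.
    allowed = ROLE_ALLOWED_COLLECTIONS.get(ctx.user_role, set())
    if collection not in allowed:
        raise OutOfScopeQueryError(
            f"Role '{ctx.user_role}' may not query collection '{collection}'."
        )
    # Gate sensitive collections by workflow stage (e.g. billing data only after verification).
    if collection == "billing_policies" and ctx.workflow_stage != "verified":
        raise OutOfScopeQueryError("Billing data is only available after verification.")
    # Retrieve with results scoped and tracked per session ID.
    results = vector_store.search(collection=collection, query=query, top_k=5)
    ctx.retrieved_ids.update(doc["id"] for doc in results)
    return results
```

In practice the same gate would sit in front of every retrieval path (RAG queries, tool calls, database lookups) rather than a single function.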
Deploying monitoring mechanisms, including ensuring AI systems only perform necessary inference and logging deviations from the defined operational scope.
A003.2 Config: Alerting system for auth failures

Screenshot of code showing that an alert or error-handling path is triggered upon authorization (authz) check failure, or a screenshot of alerting configurations in logging software (e.g. PostHog, Sentry, Datadog, Axiom) or of a downstream alert in Slack (an illustrative sketch is shown below).

Engineering Code
Universal
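A minimal sketch of such alerting is shown below, assuming alerts are forwarded to Slack via an incoming webhook. The webhook URL, logger name, AuthzError type, and is_authorized callable are hypothetical placeholders; a production system would more likely route alerts through its existing observability tooling.

```python
# Illustrative sketch only: webhook URL, logger name, and exception type are placeholders.
import logging
import requests

logger = logging.getLogger("agent.authz")
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE"  # placeholder

class AuthzError(Exception):
    """Raised when an authorization check fails for an agent action."""

def alert_authz_failure(user_id: str, action: str, resource: str) -> None:
    """Log the failed authorization check and raise a downstream alert."""
    message = f"Agent authz failure: user={user_id} action={action} resource={resource}"
    logger.error(message)
    try:
        # Downstream alert in Slack via an incoming webhook (placeholder URL).
        requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=5)
    except requests.RequestException:
        # A failed alert delivery should not mask the original authz failure.
        logger.exception("Failed to deliver authz failure alert")

def guarded_tool_call(user_id: str, action: str, resource: str, is_authorized, tool_fn):
    """Execute tool_fn only if the authorization check passes; alert otherwise."""
    if not is_authorized(user_id, action, resource):
        alert_authz_failure(user_id, action, resource)
        raise AuthzError(f"{user_id} is not authorized to {action} {resource}")
    return tool_fn()
```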
Integrating with existing authorization systems to align agent access permissions with organizational policies.
A003.3 Config: Authorization system integration

Screenshot of code showing authorization checks when context is collected or before tool execution, using existing authorization systems (e.g. OAuth, OSO, custom IAM). The evidence should verify that authorization is checked at the time of data collection or tool call, not just at initial agent invocation (an illustrative sketch is shown below).

Engineering Code
Universal
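The sketch below illustrates one way to enforce that check at call time with a decorator. The authorize callable stands in for whatever authorization system the organization already uses (OAuth scope checks, OSO policies, a custom IAM service); its signature and the example tool are assumptions.

```python
# Illustrative sketch only: the authorize callable and its signature are assumptions.
import functools

class PermissionDenied(Exception):
    """Raised when the live authorization check fails at call time."""

def require_authorization(action: str, resource: str):
    """Decorator that re-checks authorization at the moment a tool is called,
    not just when the agent session was first created."""
    def decorator(tool_fn):
        @functools.wraps(tool_fn)
        def wrapper(actor, authorize, *args, **kwargs):
            # Query the live authorization system on every call, so revoked
            # permissions take effect immediately.
            if not authorize(actor, action, resource):
                raise PermissionDenied(f"{actor} may not {action} {resource}")
            return tool_fn(actor, authorize, *args, **kwargs)
        return wrapper
    return decorator

@require_authorization(action="read", resource="customer_orders")
def fetch_order_history(actor, authorize, customer_id: str):
    # Hypothetical data access; runs only after the check above passes.
    return {"customer_id": customer_id, "orders": []}
```

The same pattern applies to context collection: the check wraps the retrieval call itself rather than the agent's entry point.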

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

Phil Venables, Former CISO of Google Cloud:
"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."

Dr. Christina Liaghati, MITRE ATLAS lead:
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

Hyrum Anderson, Senior Director, Security & AI, Cisco:
"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."

Prof. Sanmi Koyejo, Lead for Stanford Trustworthy AI Research:
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

John Bautista, Partner at Orrick:
"AIUC-1 standardizes how AI is adopted. That's powerful."

Lena Smart, Head of Trust for SecurityPal and former CISO of MongoDB:
"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."