AIUC-1
B006. Prevent unauthorized AI agent actions

Implement safeguards to prevent AI agents from performing actions beyond their intended scope and authorized privileges.

Keywords: Access Permissions, Agent Permissions

Application: Mandatory

Frequency: Every 12 months

Type: Preventative

Crosswalks

NIST AI RMF
MAP 2.1: Task definition
OWASP Top 10
LLM08:25 - Vector and Embedding Weaknesses
LLM10:25 - Unbounded Consumption
CSA AICM
AIS-11: Agents Security Boundaries
IAM-19: Agent Access Restriction
DSP-07: Data Protection by Design and Default
OWASP AIVSS
Agent Access Control Violation
Agent Orchestration and Multi-Agent Exploitation
IBM AI Risk Atlas
IBM 5: Agentic AI - Misaligned actions
IBM 7: Agentic AI - Unauthorized use
IBM 8: Agentic AI - Exploit trust mismatch
Cisco AI Security Framework
AITech-1.3: Goal Manipulation
AITech-3.1: Masquerading / Obfuscation / Impersonation
AITech-4.1: Agent Injection
AITech-4.2: Context Boundary Attacks
AITech-5.2: Configuration Persistence
AITech-7.2: Memory System Corruption
AITech-14.2: Abuse of Delegated Authority

Control activities and typical evidence

Control activity: Implementing technical restrictions that limit agent capabilities to authorized scope. For example, restricting agent access to approved backend services, APIs, and MCP servers; enforcing network segmentation or API gateway rules; or implementing service-level authorization preventing access to sensitive systems.

B006.1 Config: Agent service access restrictions

Typical evidence: Configuration showing technical limitations on agent backend access. This may include API gateway rules restricting accessible services, network policies defining allowed endpoints, an MCP server allowlist or registration configuration restricting which MCP servers and tools the agent may connect to, service-level authorization configuration, or an architecture diagram showing agent isolation boundaries, including MCP server placement and network segmentation.

Category: Technical Implementation, Engineering Code, Automation
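As one illustration of the restrictions described above, a deployment might gate every agent service call through an explicit allowlist. This is a minimal sketch, not part of AIUC-1 itself; the names (`ALLOWED_SERVICES`, `authorize`, the example services) are hypothetical.

```python
# Hypothetical allowlist gating agent calls to backend services.
# Any service/method pair not explicitly approved is rejected.

ALLOWED_SERVICES = {
    "crm-api": {"GET"},                # read-only CRM access
    "ticketing-api": {"GET", "POST"},  # read and create tickets
}

class UnauthorizedAgentAction(Exception):
    """Raised when an agent attempts a call outside its authorized scope."""

def authorize(service: str, method: str) -> None:
    """Reject any service/method pair that is not explicitly allowlisted."""
    allowed = ALLOWED_SERVICES.get(service)
    if allowed is None or method not in allowed:
        raise UnauthorizedAgentAction(f"{method} {service} is not authorized")
```

In practice the same deny-by-default check is usually enforced outside the agent process, at an API gateway or network policy layer, so a compromised agent cannot bypass it.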
Control activity: Deploying monitoring and alerting for agent actions that exceed security boundaries. For example, logging all agent service interactions, alerting on access attempts to unauthorized systems or APIs, or anomaly detection flagging unusual connection patterns.

B006.2 Config: Agent security monitoring and alerting

Typical evidence: Implementation of monitoring configuration tracking agent security-relevant actions. This may include logging setup capturing agent service calls and authentication attempts, alert rules for unauthorized system access, a security monitoring dashboard showing agent infrastructure interactions, or example logs demonstrating that boundary violations are detected.

Category: Technical Implementation, Engineering Code, Logs, Automation
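The logging-and-alerting pattern above can be sketched as follows. This is an assumed minimal implementation, not prescribed by AIUC-1; the threshold value, `record_call` signature, and logger name are illustrative.

```python
import logging

logger = logging.getLogger("agent.audit")

ALERT_THRESHOLD = 3  # illustrative: alert after repeated denied attempts

denial_counts: dict[str, int] = {}

def record_call(agent_id: str, service: str, allowed: bool) -> bool:
    """Log every agent service call; return True if an alert should fire.

    Every interaction is logged (allowed or not), and repeated denied
    attempts by the same agent raise an alert, approximating the
    'alerting on access attempts to unauthorized systems' activity.
    """
    logger.info("agent=%s service=%s allowed=%s", agent_id, service, allowed)
    if allowed:
        return False
    denial_counts[agent_id] = denial_counts.get(agent_id, 0) + 1
    if denial_counts[agent_id] >= ALERT_THRESHOLD:
        logger.warning("ALERT: agent %s exceeded denial threshold", agent_id)
        return True
    return False
```

A real deployment would ship these events to a SIEM or monitoring pipeline rather than counting denials in process memory.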
Control activity: Implementing additional safeguards to contain runtime risk. For example, applying sandboxed execution environments with restricted filesystem, network, and credential access for first-party MCP servers; monitoring MCP tool definitions for unauthorized changes after initial approval; providing pre-execution authorization hooks that verify runtime tool calls against defined policy before execution proceeds; or equivalent containment approaches appropriate to the deployment architecture.

B006.3 Config: Execution-level safeguards

Typical evidence: Configuration, documentation, or code demonstrating runtime containment controls applied to agents and MCP servers. This may include tool definition integrity controls showing how unauthorized post-approval changes are detected, container or VM configuration showing restricted filesystem and network access for first-party MCP servers, or pre-execution hook or policy engine configuration showing that tool calls are verified at runtime.

Category: Technical Implementation, Engineering Code, Automation
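Two of the safeguards named above, a pre-execution authorization hook and tool-definition integrity monitoring, can be combined in one check. The sketch below is an assumption about how such a hook might look; the policy table, tool names, and function names are hypothetical, and real MCP deployments would hook into the server's own tool-dispatch path.

```python
import hashlib
import json

# Illustrative policy: which tools the agent may invoke at runtime.
POLICY = {"search_tickets": True, "delete_account": False}

# Fingerprints of tool definitions recorded at approval time.
approved_hashes: dict[str, str] = {}

def fingerprint(tool_def: dict) -> str:
    """Stable hash of a tool definition, used to detect post-approval changes."""
    canonical = json.dumps(tool_def, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def approve_tool(name: str, tool_def: dict) -> None:
    """Record the approved definition's fingerprint."""
    approved_hashes[name] = fingerprint(tool_def)

def pre_execution_hook(name: str, tool_def: dict) -> bool:
    """Allow a tool call only if policy permits it AND the definition
    matches what was approved (i.e., it has not been tampered with)."""
    if not POLICY.get(name, False):
        return False
    return approved_hashes.get(name) == fingerprint(tool_def)
```

The integrity check matters because a tool whose definition changes after approval (for example, gaining a new parameter or broader description) may no longer match what was reviewed; hashing the definition makes such drift detectable before execution proceeds.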

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."
Phil Venables, former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state-of-the-art defensive strategies."
Dr. Christina Liaghati, MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."
Hyrum Anderson, Senior Director, Security & AI, Cisco

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
Prof. Sanmi Koyejo, Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."
John Bautista, Partner at Orrick

"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."
Lena Smart, Head of Trust at SecurityPal and former CISO of MongoDB