AIUC-1 Standard → E. Accountability

E010. Establish AI acceptable use policy

Establish and implement an AI acceptable use policy

Keywords

Acceptable Use, Breach

Application

Mandatory

Frequency

Every 12 months

Type

Preventative

Crosswalks

ISO 42001
A.2.2: AI policy
A.9.2: Processes for responsible use of AI systems
A.9.4: Intended use of the AI system
A.2.4: Review of the AI policy
A.9.3: Objectives for responsible use of AI system
4.1: Understanding the organization and its context
4.3: Determining the scope of the AI management system
5.2: AI policy
NIST AI RMF
GOVERN 1.2: Trustworthy AI policies
MAP 1.6: System requirements
MAP 3.3: Application scope
MAP 3.4: Operator proficiency
MEASURE 2.4: Production monitoring
OWASP Top 10
LLM10:2025 - Unbounded Consumption
CSA AICM
GRC-09: Acceptable Use of the AI Service
OWASP AIVSS
Agent Orchestration and Multi-Agent Exploitation
Cisco AI Security Framework
AITech-13.1: Disruption of Availability
AITech-18.1: Fraudulent Use
AITech-18.2: Malicious Workflows

Control activities and typical evidence

Defining prohibited AI usage for end-users. For example, jailbreak attempts, malicious prompt injection, unauthorized data extraction, generation of harmful content, and misuse of customer data.
E010.1 Documentation: AI acceptable use policy

Policy document defining acceptable and/or prohibited AI usage; this can be a standalone document or part of a broader document, e.g., the terms of service.

Category: Legal Policies / Acceptable Use Policy / Universal
Implementing detection and monitoring tools. For example, prompt analysis, output filtering, usage pattern anomalies, and suspicious access attempts.
E010.2 Config: AUP violation detection

Code, configuration, or monitoring system detecting acceptable use policy violations - may include prompt analysis logic, output filtering rules, anomaly detection for usage patterns, or alerting on suspicious access attempts.

Category: Technical Implementation / Engineering Code / Universal
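As a sketch of the kind of detection logic E010.2 describes, a minimal prompt-screening function is shown below. The rule names, regex patterns, and record fields are illustrative assumptions for this example, not part of the AIUC-1 standard; a production system would use far richer classifiers.

```python
import re

# Hypothetical rules for illustration only. Real deployments would pair
# pattern matching with ML-based classifiers and usage-anomaly signals.
PROHIBITED_PATTERNS = {
    "jailbreak_attempt": re.compile(r"ignore (all )?previous instructions", re.I),
    "data_extraction": re.compile(r"(dump|export) (the )?(system prompt|training data)", re.I),
}

def detect_violations(prompt: str) -> list[str]:
    """Return the names of any acceptable-use rules the prompt matches."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(prompt)]

def screen_prompt(prompt: str) -> dict:
    """Screen a prompt and produce an auditable decision record."""
    violations = detect_violations(prompt)
    return {
        "prompt": prompt,
        "violations": violations,
        "allowed": not violations,
    }
```

The decision record doubles as evidence: each screening call yields a structured result that can be logged and later sampled during an audit.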
Implementing user feedback when policy is breached. For example, showing alerts or error messages when inputs violate acceptable use.
E010.3 Demonstration: User notification for AUP breaches

User-facing alerts or error messages displayed when acceptable use policy is violated - may include in-product warning messages, blocked request notifications, or error screens explaining policy violations.

Category: Technical Implementation / Product / Universal
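A minimal sketch of the user-facing notification E010.3 describes, assuming an upstream screening step has already produced a list of violated rule names. The function name, message wording, and `respond` callback are hypothetical, chosen only to show the blocked-request path versus the normal path.

```python
def handle_request(prompt: str, violations: list[str], respond) -> str:
    """Return a policy-violation notice instead of a model response
    when the acceptable use policy is breached; otherwise answer normally."""
    if violations:
        rules = ", ".join(v.replace("_", " ") for v in violations)
        return (
            "Request blocked: this input appears to violate our acceptable "
            f"use policy ({rules}). Contact support if you believe this is an error."
        )
    return respond(prompt)
```

Screenshots of the rendered message are the kind of demonstration evidence this control item asks for.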
Implementing real-time monitoring, blocking, or alerting capabilities.
Maintaining logging and tracking systems. For example, incident creation and violation tracking with case assignment and resolution documentation.
Conducting regular effectiveness reviews. For example, quarterly analysis of violation trends, tool performance assessment, policy updates based on emerging threats, and user training adjustments.
E010.4 Documentation: Guardrails enforcing acceptable use

Documentation or screenshots showing additional AUP enforcement mechanisms - may include real-time blocking/alerting systems, violation tracking logs with incident management, effectiveness review reports analyzing violation trends and policy updates, or training materials addressing emerging misuse patterns.

Category: Technical Implementation / Engineering Practice / Universal
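One possible shape for the violation tracking and incident management E010.4 mentions: an in-memory incident log with case assignment and resolution notes. This is a sketch only; the field names and the `AUP-` case-id format are invented for illustration, and a real system would persist records and feed them into the periodic effectiveness reviews.

```python
import datetime
import itertools

class ViolationLog:
    """Minimal violation-tracking store: each breach becomes an incident
    record with a case id, an assignee, and a resolution status."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.incidents = []

    def record(self, user_id: str, rule: str, assignee: str) -> dict:
        """Open an incident for an acceptable-use violation."""
        incident = {
            "case_id": f"AUP-{next(self._ids):04d}",
            "user_id": user_id,
            "rule": rule,
            "assignee": assignee,
            "opened_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "resolved": False,
        }
        self.incidents.append(incident)
        return incident

    def resolve(self, case_id: str, note: str) -> None:
        """Close a case, keeping the resolution documentation on the record."""
        for incident in self.incidents:
            if incident["case_id"] == case_id:
                incident["resolved"] = True
                incident["resolution_note"] = note

    def open_cases(self) -> list[dict]:
        return [i for i in self.incidents if not i["resolved"]]
```

Exported incident records with assignment and resolution fields are exactly the sort of tracking-log evidence an auditor could sample.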

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents – a familiar, actionable standard for security and trust."
Phil Venables, former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state-of-the-art defensive strategies."
Dr. Christina Liaghati, MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors – we need a standard to address this gap."
Hyrum Anderson, Senior Director, Security & AI, Cisco

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
Prof. Sanmi Koyejo, Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."
John Bautista, Partner at Orrick

"An AIUC-1 certificate enables me to sign contracts much faster – it's a clear signal I can trust."
Lena Smart, Head of Trust for SecurityPal and former CISO of MongoDB