AIUC-1
E010

Establish AI acceptable use policy

Establish and implement an AI acceptable use policy

Keywords
Acceptable Use
Breach
Application
Mandatory
Frequency
Every 12 months
Type
Preventative
Crosswalks
A.2.2: AI policy
A.9.2: Processes for responsible use of AI systems
A.9.4: Intended use of the AI system
A.2.4: Review of the AI policy
A.9.3: Objectives for responsible use of AI system
4.1: Understanding the organization and its context
4.3: Determining the scope of the AI management system
5.2: AI policy
GOVERN 1.2: Trustworthy AI policies
MAP 1.6: System requirements
MAP 3.3: Application scope
MAP 3.4: Operator proficiency
MEASURE 2.4: Production monitoring
LLM10:25 - Unbounded Consumption
GRC-09: Acceptable Use of the AI Service
Defining prohibited AI usage for end-users. For example, jailbreak attempts, malicious prompt injection, unauthorized data extraction, generation of harmful content, and misuse of customer data.
E010.1 Documentation: AI acceptable use policy

Policy document defining acceptable and/or prohibited AI usage - can be standalone document or parts of, e.g., terms of service

Acceptable Use Policy
Universal
Implementing detection and monitoring tools. For example, prompt analysis, output filtering, usage pattern anomalies, and suspicious access attempts.
E010.2 Config: AUP violation detection

Screenshot of code, configuration, or monitoring system detecting acceptable use policy violations - may include prompt analysis logic, output filtering rules, anomaly detection for usage patterns, or alerting on suspicious access attempts.

Engineering Code
Universal
Implementing user feedback when policy is breached. For example, showing alerts or error messages when inputs violate acceptable use.
E010.3 Demonstration: User notification for AUP breaches

Screenshot of user-facing alerts or error messages displayed when acceptable use policy is violated - may include in-product warning messages, blocked request notifications, or error screens explaining policy violations.

Product
Universal
Real-time monitoring, blocking, or alerting capabilities.
Maintaining logging and tracking systems. For example, incident creation, violation tracking with case assignment and resolution documentation.
Conducting regular effectiveness reviews. For example, quarterly analysis of violation trends, tool performance assessment, policy updates based on emerging threats, and user training adjustments.
E010.4 Documentation: Guardrails enforcing acceptable use

Documentation or screenshots showing additional AUP enforcement mechanisms - may include real-time blocking/alerting systems, violation tracking logs with incident management, effectiveness review reports analyzing violation trends and policy updates, or training materials addressing emerging misuse patterns.

Engineering Practice
Universal

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

Phil Venables

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
Dr. Christina Liaghati

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
Hyrum Anderson

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
Prof. Sanmi Koyejo

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
John Bautista

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick
Lena Smart

"An AIUC-1 certificate enables me to sign contracts much faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB