C005

Prevent customer-defined high risk outputs

Implement safeguards or technical controls to prevent additional high risk outputs as defined in the AI risk taxonomy

Keywords: High-Risk Outputs, Risk Taxonomy, Technical Controls

Application: Mandatory
Frequency: Every 12 months
Type: Preventative

Crosswalks

EU AI Act
Article 9: Risk Management System
NIST AI RMF
MANAGE 1.4: Residual risk documentation
OWASP Top 10
LLM05:2025 - Improper Output Handling
CSA AICM
GRC-09: Acceptable Use of the AI Service
LOG-15: Output Monitoring
TVM-11: Guardrails
STA-10: Primary Service and Contractual Agreement
OWASP AIVSS
Agent Goal and Instruction Manipulation
IBM AI Risk Atlas
IBM 60: Output - Harmful output
IBM 61: Output - Harmful code generation
Cisco AI Security Framework
AITech-2.1: Jailbreak
AITech-15.1: Harmful Content

Control activities and typical evidence

Control activities:
Implementing detection and blocking mechanisms aligned with organizational risk taxonomy. For example, deploying filtering based on defined risk categories and severity thresholds.
Implementing response actions for detected risks. For example, blocking high-severity outputs, flagging medium-risk content for review, logging violations for monitoring and analysis.

Typical evidence: C005.1 Config: Risk detection and response
Filtering rules, system configuration, or code showing detection logic mapped to AI risk taxonomy categories and corresponding response actions per severity level - may include risk classifiers with block/flag/log rules, content moderation API configuration defining actions by risk type, or defensive prompting.
Category: Technical Implementation; Eng: LLM output filtering logic; Universal
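
In practice, block/flag/log rules of this kind are often a small dispatch layer between a risk classifier (or moderation API) and the agent's output path. The sketch below shows one illustrative shape for that logic; the taxonomy categories, severity thresholds, and values are assumptions for the example, not part of the control.

```python
# Minimal sketch: map detected risk-taxonomy categories to response actions.
# Category names, thresholds, and the upstream classifier are illustrative
# assumptions, not values prescribed by the AIUC-1 requirement text.
from dataclasses import dataclass
from enum import Enum


class Action(Enum):
    BLOCK = "block"   # suppress the output entirely
    FLAG = "flag"     # deliver, but queue for human review
    LOG = "log"       # deliver, and record for monitoring


@dataclass
class RiskFinding:
    category: str     # e.g. "medical_advice", drawn from the org's risk taxonomy
    severity: float   # 0.0 - 1.0 score from a classifier or moderation API


# Per-category severity thresholds for each response action (illustrative values).
POLICY = {
    "medical_advice":   {Action.BLOCK: 0.8, Action.FLAG: 0.5},
    "financial_advice": {Action.BLOCK: 0.9, Action.FLAG: 0.6},
    "self_harm":        {Action.BLOCK: 0.3, Action.FLAG: 0.1},
}


def decide_action(finding: RiskFinding) -> Action:
    """Return the response action for a single classifier finding."""
    thresholds = POLICY.get(finding.category, {})
    if finding.severity >= thresholds.get(Action.BLOCK, 1.1):
        return Action.BLOCK
    if finding.severity >= thresholds.get(Action.FLAG, 1.1):
        return Action.FLAG
    return Action.LOG


if __name__ == "__main__":
    print(decide_action(RiskFinding("self_harm", 0.4)))       # Action.BLOCK
    print(decide_action(RiskFinding("medical_advice", 0.6)))  # Action.FLAG
```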

Control activity:
Establishing escalation procedures for flagged high-risk content. For example, defining when human review is required and establishing approval workflows for edge cases.

Typical evidence: C005.2 Documentation: Human review workflows
Documentation or workflow configuration showing human review and escalation procedures for flagged content - may include runbook defining escalation criteria and review SLAs, workflow diagram showing approval process, or ticketing system configuration (Jira, Linear) with content review queues and assignment rules.
Category: Technical Implementation; Engineering Practice; Universal
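
A runbook or workflow diagram typically defines when review is required; the same criteria can also be expressed as configuration so the pipeline files review items automatically. A minimal sketch, assuming hypothetical rule values and a create_review_ticket stub in place of a real Jira or Linear integration:

```python
# Minimal sketch of escalation criteria for flagged content: decide whether a
# finding requires human review and with what SLA. The rule values, queue name,
# and create_review_ticket are hypothetical stand-ins for a ticketing system.
from datetime import timedelta


def create_review_ticket(queue: str, conversation_id: str, category: str,
                         due_in: timedelta) -> None:
    # Stand-in for a ticketing API call; here it only prints the review item.
    print(f"[{queue}] review {conversation_id}: {category}, due in {due_in}")


# Illustrative escalation criteria: (taxonomy category, minimum severity) -> SLA.
ESCALATION_RULES = {
    ("legal_advice", 0.5): timedelta(hours=4),
    ("minors_safety", 0.2): timedelta(hours=1),
}


def review_sla(category: str, severity: float) -> timedelta | None:
    """Return the review SLA if human review is required, else None."""
    for (rule_category, min_severity), sla in ESCALATION_RULES.items():
        if category == rule_category and severity >= min_severity:
            return sla
    return None


def escalate(conversation_id: str, category: str, severity: float) -> None:
    sla = review_sla(category, severity)
    if sla is not None:
        create_review_ticket("content-review", conversation_id, category, sla)
    # Findings below the escalation threshold are still logged by the
    # monitoring pipeline (not shown here).


if __name__ == "__main__":
    escalate("conv-123", "legal_advice", 0.7)   # files a review ticket
    escalate("conv-124", "legal_advice", 0.3)   # no escalation
```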

Control activity:
Implementing automated real-time interventions. For example, blocking or modifying outputs based on severity.

Typical evidence: C005.3 Config: Automated response mechanisms
Code or system configuration showing automated response mechanisms - may include logic blocking or modifying outputs based on risk scores, or dynamic warning messages triggered by content flags.
Category: Technical Implementation; Engineering Code; Universal
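
Such response mechanisms are frequently a severity-based switch applied to the draft output just before it is returned to the user. A minimal sketch, assuming illustrative thresholds and warning text:

```python
# Minimal sketch of an automated real-time intervention: block, warn, or pass
# through a draft response based on its risk score. Thresholds and message
# text are illustrative assumptions, not values prescribed by AIUC-1.
def intervene(draft: str, risk_score: float) -> str:
    if risk_score >= 0.9:
        # High severity: block the output and return a safe refusal instead.
        return "I can't help with that request."
    if risk_score >= 0.6:
        # Medium severity: deliver the output with a dynamic warning appended.
        return draft + "\n\nNote: this response may touch on a sensitive topic; please verify it with a qualified professional."
    # Low severity: deliver unchanged (violations are still logged upstream).
    return draft


if __name__ == "__main__":
    print(intervene("Here is a general overview...", 0.65))
```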

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."
Phil Venables, former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."
Dr. Christina Liaghati, MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."
Hyrum Anderson, Senior Director, Security & AI, Cisco

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
Prof. Sanmi Koyejo, Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."
John Bautista, Partner at Orrick

"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."
Lena Smart, Head of Trust for SecurityPal and former CISO of MongoDB