AIUC-1: The Security, Safety, and Reliability standard for AI agents
AIUC-1 Standard → C. Safety

C001. Define AI risk taxonomy

Establish a risk taxonomy based on system capabilities and deployment context

Keywords: Risk Taxonomy, Severity Rating
Application: Mandatory
Frequency: Every 12 months
Type: Preventative

Crosswalks

EU AI Act
Article 9: Risk Management System
ISO 42001
A.5.2: AI system impact assessment process
A.5.3: Documentation of AI system impact assessments
A.5.4: Assessing AI system impact on individuals or groups of individuals
A.5.5: Assessing societal impacts of AI systems
4.1: Understanding the organization and its context
6.1.1: Actions to address risks and opportunities — General
6.1.2: AI risk assessment
6.1.3: AI risk treatment
6.1.4: AI system impact assessment
8.2: AI risk assessment
8.3: AI risk treatment
8.4: AI system impact assessment
NIST AI RMF
GOVERN 1.3: Risk management processes
GOVERN 1.4: Risk management governance
GOVERN 4.2: Risk documentation
GOVERN 6.1: Third-party risk policies
MANAGE 1.2: Risk prioritization
MANAGE 1.3: Risk response planning
MANAGE 1.4: Residual risk documentation
MAP 1.5: Risk tolerance
MAP 5.1: Impact assessment
MEASURE 1.1: Risk metrics selection
MEASURE 2.10: Privacy risk assessment
MEASURE 2.11: Fairness and bias
MEASURE 3.1: Emergent risk tracking
CSA AICM
A&A-05: Audit Management Process
A&A-06: Remediation
BCR-02: Risk Assessment and Impact Analysis
CEK-07: Encryption Risk Management
DSP-09: Data Protection Impact Assessment
GRC-02: Risk Management Program
MDS-11: Model Failure
CCC-03: Change Management Technology
MDS-12: Open Model Risk Assessment
IBM AI Risk Atlas
IBM 67: Output - Nonconsensual use
IBM 69: Output - Improper usage
IBM 83: Non-Technical - Incomplete usage definition
IBM 84: Non-Technical - Unrepresentative risk testing
IBM 85: Non-Technical - Incorrect risk testing
IBM 86: Non-Technical - Lack of testing diversity
CO AI Act
6-1-1702: Developer Duties
6-1-1703: Deployer Duties
CA SB 53
22757.12: Transparency & Reporting Obligations

Control activities

- Defining risk categories with severity levels and examples based on industry and deployment context. For example, classifying harmful outputs such as distressed outputs, angry responses, high-risk advice, offensive content, bias, and deception; and identifying other high-risk use cases such as safety-critical instructions, legal recommendations, and financial advice.
- Aligning the risk taxonomy with external frameworks and standards.
- Establishing severity grading appropriate to organizational context and risk tolerance. For example, implementing a consistent scoring methodology across risk categories and defining thresholds for flagging and human review.
- Maintaining taxonomy currency with documented change management. For example, updating based on emerging threats or incidents.

Typical evidence

C001.1 Documentation: AI risk taxonomy

Internal policy document, risk framework, or taxonomy defining AI risk categories with severity levels and examples specific to the deployment context. Example taxonomies to draw upon include the NIST AI RMF functions, EU AI Act Article 9, and ISO 42001 controls.

Category: Operational Practices, Internal policies, Universal

C001.2 Documentation: Risk taxonomy reviews

Meeting notes, change log, or review documentation showing annual reviews of the risk taxonomy. Could include review dates, participants, decisions made (categories added/removed/modified, threshold adjustments), rationale for changes, approval records, and version history showing taxonomy updates over time with timestamps. Can be standalone or part of broader internal audit/review or change management procedures.

Category: Operational Practices, Internal processes, Universal

Organizations can submit alternative evidence demonstrating how they meet the requirement.
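The severity grading and human-review thresholds described in the control activities can be sketched in code. This is a minimal illustration, not part of the standard: the category names, the four-level severity scale, and the review threshold are all hypothetical assumptions chosen for the example.

```python
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    """Hypothetical severity scale; AIUC-1 does not prescribe specific levels."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass(frozen=True)
class RiskCategory:
    """One entry in the organization's risk taxonomy."""
    name: str
    severity: Severity
    examples: tuple[str, ...]


# Illustrative taxonomy entries drawn from the control activities above.
TAXONOMY = [
    RiskCategory("offensive_content", Severity.MEDIUM, ("slurs", "harassment")),
    RiskCategory("high_risk_advice", Severity.HIGH,
                 ("legal recommendations", "financial advice")),
    RiskCategory("safety_critical", Severity.CRITICAL,
                 ("safety-critical instructions",)),
]

# Hypothetical threshold: outputs at or above HIGH are flagged for human review.
REVIEW_THRESHOLD = Severity.HIGH


def needs_human_review(category: RiskCategory) -> bool:
    """Apply one consistent scoring rule across all risk categories."""
    return category.severity >= REVIEW_THRESHOLD
```

Keeping the threshold as a single named constant is one way to make the "consistent scoring methodology across risk categories" auditable: a reviewer can see exactly where the flagging boundary sits and how it changed over time.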

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."
Phil Venables, former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state-of-the-art defensive strategies."
Dr. Christina Liaghati, MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."
Hyrum Anderson, Senior Director, Security & AI, Cisco

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
Prof. Sanmi Koyejo, lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."
John Bautista, Partner at Orrick

"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."
Lena Smart, Head of Trust at SecurityPal and former CISO of MongoDB