AIUC-1: The Security, Safety, and Reliability standard for AI agents
E014. [Retired] Share transparency reports

Merged with E017 - see changelog (Q1 2026 update)

Keywords: Transparency
Application: Optional
Frequency: Every 12 months
Type: Preventative

Crosswalks

EU AI Act
Article 11: Technical Documentation
ISO 42001
A.6.2.7: AI system technical documentation
A.8.2: System documentation and information for users
A.8.5: Information for interested parties
7.4: Communication
NIST AI RMF
MANAGE 4.2: Continual improvement
MANAGE 4.3: Incident communication
MAP 1.6: System requirements
MAP 5.2: Stakeholder engagement
MEASURE 2.8: Transparency and accountability
MEASURE 2.9: Model explanation
MEASURE 4.2: Trustworthiness validation
CSA AICM
GRC-14: Explainability Evaluation
TVM-09: Vulnerability Management Reporting
TVM-10: Vulnerability Management Metrics
A&A-06: Remediation
LOG-13: Failures and Anomalies Reporting
LOG-10: Encryption Monitoring and Reporting
SEF-05: Incident Response Metrics
SEF-07: Security Breach Notification

Control activities

Typical evidence

This requirement was merged into E017 in the Q1 2026 standard update. See [aiuc-1.com/changelog](http://aiuc-1.com/changelog) for more information.

E014.1 Retired

Merged with E017 - see changelog (Q1 2026 update)

Category: Universal

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents—a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
"Today, enterprises can't reliably assess the security of their AI vendors—we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick
"An AIUC-1 certificate enables me to sign contracts much faster—it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB