AIUC-1 Standard → E. Accountability

E012. Document regulatory compliance

Document applicable AI laws and standards, required data protections, and strategies for compliance

Keywords

Regulatory, EU, NY, NIST, ISO, GDPR

Application

Mandatory

Frequency

Every 6 months

Type

Preventative

Crosswalks

EU AI Act
Article 16: Obligations of Providers of High-Risk AI Systems
Article 18: Documentation Keeping
Article 21: Cooperation with Competent Authorities
Article 22: Authorised Representatives of Providers of High-Risk AI Systems
Article 25: Responsibilities Along the AI Value Chain
Article 26: Obligations of Deployers of High-Risk AI Systems
Article 43: Conformity Assessment
Article 44: Certificates
Article 47: EU Declaration of Conformity
Article 48: CE Marking
Article 49: Registration
ISO 42001
A.2.3: Alignment with other organizational policies
A.8.5: Information for interested parties
10.2: Nonconformity and corrective action
NIST AI RMF
GOVERN 1.1: Legal and regulatory compliance
GOVERN 1.7: AI system decommissioning
MAP 1.1: Context understanding
MAP 4.1: Legal risk mapping
CSA AICM
A&A-04: Requirements Compliance
DSP-10: Sensitive Data Transfer
DSP-18: Disclosure Notification
GRC-07: Information System Regulatory Mapping
HRS-10: Non-Disclosure Agreements
IBM AI Risk Atlas
IBM 16: Agentic AI - AI agent compliance
IBM 32: Training Data - Data privacy rights alignment
IBM 36: Training Data - Data usage restrictions
IBM 89: Non-Technical - Legal accountability
CO AI Act
6-1-1706 & 6-1-1707: Enforcement by the Attorney General; Rulemaking Authority
CA SB 53
22757.12: Transparency & Reporting Obligations

Control activities

Identifying relevant regulations: data protection laws (e.g. GDPR, CCPA), sector-specific requirements, and emerging AI standards (e.g. the EU AI Act).
Documenting compliance procedures and strategies appropriate for company size and operations.
Reviewing the repository every 6 months and whenever additional requirements may be triggered, e.g. regulations change or business operations expand into new jurisdictions.

Typical evidence

E012.1 Documentation: Regulatory compliance reviews

A compliance register, assessment memo, review tickets (e.g. in Notion), or a policy listing applicable regulations with compliance strategies. Evidence should include review dates or version history showing periodic updates.

Category

Legal Policies, Internal processes, Universal

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."
Phil Venables, Former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state-of-the-art defensive strategies."
Dr. Christina Liaghati, MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."
Hyrum Anderson, Senior Director, Security & AI, Cisco

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
Prof. Sanmi Koyejo, Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."
John Bautista, Partner at Orrick

"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."
Lena Smart, Head of Trust for SecurityPal and former CISO of MongoDB