AIUC-1
E017

Document system transparency policy

Establish a system transparency policy and maintain a repository of model cards, datasheets, and interpretability reports for major systems

Keywords: Transparency, System Cards
Application: Optional
Frequency: Every 12 months
Type: Preventative
Crosswalks
AML-M0023: AI Bill of Materials
AML-M0025: Maintain AI Dataset Provenance
Article 11: Technical Documentation
A.4.2: Resource documentation
A.4.3: Data resources
A.4.4: Tooling resources
A.4.5: System and computing resources
A.6.2.3: Documentation of AI system design and development
A.2.2: AI policy
A.2.4: Review of the AI policy
4.3: Determining the scope of the AI management system
5.2: AI policy
GOVERN 1.2: Trustworthy AI policies
GOVERN 1.6: AI system inventory
MAP 1.6: System requirements
MEASURE 2.8: Transparency and accountability
MEASURE 2.9: Model explanation
MEASURE 4.3: Performance tracking
GRC-13: Explainability Requirement
GRC-14: Explainability Evaluation
MDS-03: Model Documentation
MDS-04: Model Documentation Requirements
MDS-05: Model Documentation Validation
STA-16: Service Bill of Material (BOM)
DSP-20: Data Provenance and Transparency
Establishing a transparency policy defining documentation requirements for major AI systems. For example, specifying required documentation elements and establishing documentation standards.
E017.1 Documentation: Transparency policy

Policy document defining transparency documentation requirements - may include criteria for systems requiring documentation, required documentation elements (capabilities, limitations, use cases, risks), or documentation standards and templates.

Internal policies
Universal
Creating transparency documentation for major AI systems. For example, documenting system characteristics, data provenance, and model behavior for systems meeting documentation criteria.
E017.2 Documentation: Model cards and system documentation

Transparency documentation artifacts - may include a model card (PDF, Markdown, web page) with system capabilities/limitations/intended use, a datasheet showing training data sources and characteristics, an interpretability report with example inputs/outputs and decision explanations, technical documentation describing model architecture and performance metrics, or an AI Bill of Materials (which may follow CycloneDX or SPDX 3.0).
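As an illustrative sketch only (the system name and field names below are assumptions, not a normative AIUC-1 schema), a minimal model card could be rendered as Markdown from structured metadata, keeping the documented capabilities, limitations, and risks in one machine-readable place:

```python
# Illustrative sketch: render a minimal model card as Markdown from
# structured metadata. Field names and values are hypothetical examples,
# not AIUC-1 requirements.
MODEL_CARD = {
    "name": "support-triage-v2",  # hypothetical system name
    "intended_use": "Routing inbound customer support tickets",
    "capabilities": ["Ticket classification", "Priority scoring"],
    "limitations": ["English-only training data", "No PII redaction"],
    "risks": ["Misrouting of urgent tickets"],
    "training_data": "Internal ticket archive, 2021-2024",
}

def render_model_card(card: dict) -> str:
    """Render the metadata dict as a Markdown model card."""
    lines = [f"# Model Card: {card['name']}", ""]
    lines += [f"**Intended use:** {card['intended_use']}", ""]
    for section in ("capabilities", "limitations", "risks"):
        lines.append(f"## {section.capitalize()}")
        lines += [f"- {item}" for item in card[section]]
        lines.append("")
    lines.append(f"**Training data:** {card['training_data']}")
    return "\n".join(lines)

print(render_model_card(MODEL_CARD))
```

Keeping the card as structured data rather than free-form prose makes it easy to validate that required elements (capabilities, limitations, risks) are present before publishing.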

Engineering Code
Universal
Defining policies for sharing transparency documentation with external stakeholders. For example, establishing when reports are shared, specifying recipient categories, and determining what information is disclosed to each stakeholder type.
Documenting sharing procedures, including approval workflows, version control, and distribution tracking. For example, establishing approval requirements before external sharing, maintaining version control of shared documents, and tracking which stakeholders received which versions.
E017.3 Documentation: Transparency report sharing policy

Policy document defining transparency sharing practices - may include sharing triggers, recipient categories with disclosure levels (regulators, customers, affected parties, public), or matrix mapping stakeholder types to shared documentation (model cards, datasheets, performance reports, incident summaries).
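One way such a matrix could be expressed (the stakeholder categories and document names below are illustrative assumptions drawn from the examples above, not a prescribed AIUC-1 taxonomy) is as a simple mapping that sharing workflows can consult before distribution:

```python
# Illustrative sketch of a sharing matrix mapping stakeholder types to the
# transparency documents they may receive. Categories and document names
# are hypothetical examples, not a normative AIUC-1 list.
SHARING_MATRIX = {
    "regulators": {"model card", "datasheet", "performance report", "incident summary"},
    "customers": {"model card", "performance report"},
    "affected_parties": {"model card", "incident summary"},
    "public": {"model card"},
}

def may_share(stakeholder: str, document: str) -> bool:
    """Return True if the policy allows sharing `document` with `stakeholder`."""
    return document in SHARING_MATRIX.get(stakeholder, set())

print(may_share("customers", "model card"))     # expected: True
print(may_share("public", "incident summary"))  # expected: False
```

Encoding the matrix in one place lets an approval workflow check a proposed disclosure against policy automatically, rather than relying on ad hoc judgment at sharing time.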

Internal processes, Internal policies
Universal

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

Phil Venables

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."

Google Cloud
Former CISO of Google Cloud
Dr. Christina Liaghati

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
MITRE ATLAS lead
Hyrum Anderson

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."

Cisco
Senior Director, Security & AI
Prof. Sanmi Koyejo

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Lead for Stanford Trustworthy AI Research
John Bautista

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
Partner at Orrick
Lena Smart

"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."

SecurityPal
Head of Trust for SecurityPal and former CISO of MongoDB