AIUC-1 Standard → E. Accountability

E015. Log AI system activity

Maintain logs of AI system processes, actions, and agent outputs, where permitted, to support incident investigation, auditing, and explanation of AI system behavior.

Keywords: Explainability, Logs
Application: Mandatory
Frequency: Every 12 months
Type: Detective

Crosswalks

MITRE ATLAS
AML-M0024: AI Telemetry Logging
EU AI Act
Article 12: Record-Keeping
Article 19: Automatically Generated Logs
ISO 42001
A.6.2.8: AI system recording of event logs
NIST AI RMF
MEASURE 2.4: Production monitoring
MEASURE 2.8: Transparency and accountability
OWASP Top 10
LLM10:2025 - Unbounded Consumption
CSA AICM
LOG-01: Logging and Monitoring Policy and Procedures
LOG-03: Security Monitoring and Alerting
LOG-07: Logging Scope
LOG-08: Log Records
LOG-09: Log Protection
LOG-10: Encryption Monitoring and Reporting
LOG-11: Transaction / Activity Logging
LOG-13: Failures and Anomalies Reporting
LOG-14: Input Monitoring
LOG-15: Output Monitoring
MDS-10: Model Continuous Monitoring
LOG-04: Audit Logs Access and Accountability
LOG-05: Audit Logs Monitoring and Response
LOG-06: Clock Synchronization
LOG-12: Access Control Logs
SEF-05: Incident Response Metrics
SEF-07: Security Breach Notification
OWASP AIVSS
Agent Untraceability
IBM AI Risk Atlas
IBM 1: Agentic AI - Unexplainable and untraceable actions
IBM 13: Agentic AI - Lack of AI agent transparency
IBM 14: Agentic AI - Reproducibility
IBM 15: Agentic AI - Accountability of AI agent actions
Cisco AI Security Framework
AITech-11.2: Model-Selective Evasion
AITech-16.1: Eavesdropping
AITech-18.2: Malicious Workflows

Control activities

Typical evidence

Capturing system activity details to support incident investigation and behavior explanation. For example, logging inputs, processing steps, outputs, and metadata for AI systems.
E015.1 Config: Logging implementation

Logging code or configuration showing what system activity is captured - may include code logging inputs and outputs, logging configuration file specifying what to log, or example log entries showing captured information (timestamps, inputs, outputs, user actions).

Category: Technical Implementation · Logs · Universal
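A minimal sketch of the input/output logging that E015.1 evidence describes. The field names and logger setup here are illustrative only; AIUC-1 does not prescribe a schema:

```python
import json
import logging
import time
import uuid

logger = logging.getLogger("ai_activity")
logging.basicConfig(level=logging.INFO)

def log_ai_interaction(user_input: str, model_output: str,
                       model: str, user_id: str) -> str:
    """Record one AI interaction as a structured, timestamped log entry."""
    entry = {
        "event_id": str(uuid.uuid4()),  # unique ID for later incident lookup
        "timestamp": time.time(),       # when the interaction happened
        "model": model,                 # which model/version produced the output
        "user_id": user_id,             # who triggered the request
        "input": user_input,            # what the system received
        "output": model_output,         # what the system returned
    }
    logger.info(json.dumps(entry))
    return entry["event_id"]
```

Emitting each entry as a single JSON line keeps the log machine-parseable, which is what makes later incident investigation and auditing practical.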
Implementing log storage with appropriate retention periods, access controls, and data sanitation to support auditing and incident response.
E015.3 Config: Log storage

Log storage system showing retention policies, access controls and sanitation practices - may include log management platform (Datadog, Splunk, CloudWatch) with retention period settings and PII-masking, access control configuration showing who can view logs, or storage settings with automatic deletion rules.

Category: Technical Implementation · Logs · Engineering Tooling · Universal
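One piece of the storage controls above is data sanitation before retention. A simplified sketch of PII masking applied to log lines; real deployments typically rely on their log platform's masking rules (e.g. Datadog or Splunk features), and the regexes here are illustrative examples, not a complete PII catalog:

```python
import re

# Illustrative sanitation step: mask common PII patterns before log
# entries reach long-term storage.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),       # US SSN format
]

def sanitize(line: str) -> str:
    """Replace PII matches with placeholder tokens before the line is stored."""
    for pattern, replacement in PII_PATTERNS:
        line = pattern.sub(replacement, line)
    return line
```

Masking at write time, rather than at read time, means the retained logs themselves never contain the sensitive values, which simplifies both retention and access-control decisions.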
Implementing technical controls to ensure logs are tamper-evident and independently verifiable. For example, ensuring that captured records cannot be modified or deleted after creation, ensuring sequence integrity so that gaps, omissions, and reordering are detectable during incident investigation or audit.
E015.4 Config: Log integrity protection

Log immutability controls - for example, write-once-read-many (WORM) storage configuration, cryptographic hashing of log entries, append-only database settings, or third-party log management platform features.

Category: Technical Implementation · Logs · Engineering Code · Universal
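The tamper-evidence property above can be illustrated with hash chaining, one common technique alongside WORM storage and append-only databases. This is a sketch, not a production implementation:

```python
import hashlib
import json

GENESIS = "0" * 64

class HashChainedLog:
    """Append-only log where each entry commits to the previous entry's
    hash, so modification, deletion, or reordering breaks the chain and
    is detectable during audit."""

    def __init__(self):
        self.entries = []
        self._last_hash = GENESIS

    def append(self, record: dict) -> str:
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((self._last_hash + payload).encode()).hexdigest()
        self.entries.append({"record": record,
                             "prev": self._last_hash,
                             "hash": entry_hash})
        self._last_hash = entry_hash
        return entry_hash

    def verify(self) -> bool:
        """Recompute the chain; any tampered, missing, or reordered entry fails."""
        prev = GENESIS
        for e in self.entries:
            payload = json.dumps(e["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Because each hash depends on the previous one, an auditor who trusts only the final hash can detect gaps and reordering anywhere earlier in the sequence.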
Capturing full execution chains of agentic workflows to support investigation of agent-specific incidents. For example, logging agent provenance metadata, tool call parameters and results, sub-agent delegations and their outcomes, approval/authorization events (e.g., human-in-the-loop approvals), and reasoning traces where available.
E015.2 Config: AI agent logging implementation

Logging code or configuration demonstrating agent execution logging - may include log fields capturing agent provenance metadata per execution (e.g. agent type identifier, creator or deployment origin); structured log entries capturing tool call parameters and their results; delegation chain records showing sub-agent handoffs with identity, task context, and outcome at each step; approval/authorization records linked to execution (e.g., approver identity, timestamp, decision outcome); or reasoning trace output from the agent framework.

Category: Technical Implementation · Logs · Universal
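A sketch of one agent-execution log entry of the kind E015.2 describes, covering provenance, delegation, tool calls, and approval events. All field names are assumptions for illustration; agent frameworks expose this metadata in their own shapes:

```python
import json
import time
import uuid
from typing import Optional

def log_tool_call(agent_id: str, parent_agent: Optional[str], tool: str,
                  params: dict, result: str,
                  approved_by: Optional[str] = None) -> dict:
    """Record one step of an agent execution chain: which agent called
    which tool, with what parameters, what came back, and who (if
    anyone) approved the action."""
    entry = {
        "step_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent_id": agent_id,          # provenance: which agent acted
        "parent_agent": parent_agent,  # delegation chain: who handed off the task
        "tool": tool,
        "params": params,
        "result": result,
        "approved_by": approved_by,    # human-in-the-loop approval, if any
    }
    print(json.dumps(entry))  # in practice, ship to the log platform
    return entry
```

Linking each step to its parent agent is what lets an investigator reconstruct the full delegation chain for an agent-specific incident, rather than seeing isolated tool calls.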

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."
Phil Venables, Former CISO of Google Cloud

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state-of-the-art defensive strategies."
Dr. Christina Liaghati, MITRE ATLAS lead

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."
Hyrum Anderson, Senior Director, Security & AI, Cisco

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."
Prof. Sanmi Koyejo, Lead for Stanford Trustworthy AI Research

"AIUC-1 standardizes how AI is adopted. That's powerful."
John Bautista, Partner at Orrick

"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."
Lena Smart, Head of Trust for SecurityPal and former CISO of MongoDB