AIUC-1
The Security, Safety, and Reliability standard for AI agents

AIUC-1 Standard
C. Safety

C009. Enable real-time feedback and intervention

Implement mechanisms that enable real-time user feedback collection, intervention, and actioning.

Keywords: Feedback, Intervention, User Control, Transparency
Application: Optional
Frequency: Every 3 months
Type: Preventative

Crosswalks

EU AI Act: Article 14 (Human Oversight)
ISO 42001: A.8.3 (External reporting)
NIST AI RMF: GOVERN 3.2 (Human-AI oversight); MAP 3.5 (Human oversight); MEASURE 3.3 (User feedback systems)
CSA AICM: GRC-15 (Human supervision)
IBM AI Risk Atlas: IBM 4 (Agentic AI: over- or under-reliance on AI agents); IBM 20 (Agentic AI: AI agents' impact on human agency); IBM 64 (Output: over- or under-reliance)
CO AI Act: 6-1-1703 (Deployer Duties)

Control activities and typical evidence

Enabling user intervention capabilities. For example: providing mechanisms for users to pause, stop, or redirect system behavior; implementing feedback collection tools for users to report issues or concerns; and ensuring technical controls persist across devices and interaction contexts.

Ensuring accessibility of feedback and intervention mechanisms. For example: adhering to WCAG 2.1 standards for color contrast, screen reader compatibility, keyboard navigation, and clear messaging for users with disabilities.
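The WCAG 2.1 color-contrast criterion mentioned above is mechanically checkable. Below is a minimal sketch of the contrast-ratio computation as WCAG 2.1 defines it (sRGB relative luminance, with the 4.5:1 AA threshold for normal-size text); the function names are illustrative, not part of AIUC-1:

```python
def _linearize(channel_8bit: int) -> float:
    """Convert an 8-bit sRGB channel to its linear-light value (WCAG 2.1)."""
    c = channel_8bit / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4


def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Relative luminance of an sRGB color, per the WCAG 2.1 formula."""
    r, g, b = (_linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colors, in the range 1.0 to 21.0."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


def meets_aa_normal_text(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> bool:
    # WCAG 2.1 SC 1.4.3 requires at least 4.5:1 for normal-size text.
    return contrast_ratio(fg, bg) >= 4.5
```

Black text on a white background yields the maximum ratio of 21:1; lighter grays on white fall below the 4.5:1 threshold and would fail an automated check of this control activity.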
C009.1 Demonstration: User intervention mechanisms

Intervention controls (stop/pause/redirect buttons, feedback forms, issue reporting mechanisms) with accessibility features integrated (e.g. keyboard navigation, high contrast modes, screen reader labels).

Category: Technical Implementation, Product, Universal
Reviewing user feedback and intervention logs at regular intervals, analyzing findings using structured methodologies (e.g., categorizing by risk domain, frequency, and severity), and integrating corrective actions into product backlogs or compliance workflows, with records maintained for traceability.

C009.2 Documentation: User feedback & intervention reviews

Logs, reports, or dashboards showing review, analysis, and actioning of user feedback and intervention patterns. This may include feedback summary reports, intervention frequency analysis, categorization by risk domain, documentation of system changes made in response to patterns, or integration with product backlog/compliance workflows.

Category: Operational Practices, Internal processes, Universal
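The structured analysis described in C009.2 (categorizing feedback by risk domain, frequency, and severity) can be sketched as a small aggregation step; the record fields and domain names here are illustrative assumptions, not part of the standard:

```python
from collections import defaultdict


def summarize_feedback(records: list[dict]) -> list[tuple[str, dict]]:
    """Aggregate feedback records into per-domain frequency and peak severity,
    ordered so the highest-risk domains surface first for review."""
    by_domain: dict[str, dict] = defaultdict(
        lambda: {"count": 0, "max_severity": 0}
    )
    for record in records:
        entry = by_domain[record["domain"]]
        entry["count"] += 1
        entry["max_severity"] = max(entry["max_severity"], record["severity"])
    # Sort by peak severity first, then by frequency, both descending.
    return sorted(
        by_domain.items(),
        key=lambda item: (-item[1]["max_severity"], -item[1]["count"]),
    )
```

The sorted output can feed a product backlog or compliance dashboard, matching the "integration with product backlog/compliance workflows" evidence described above.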

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

Phil Venables, former CISO of Google Cloud: "We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."

Dr. Christina Liaghati, MITRE ATLAS lead: "Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

Hyrum Anderson, Senior Director, Security & AI at Cisco: "Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."

Prof. Sanmi Koyejo, lead for Stanford Trustworthy AI Research: "Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

John Bautista, Partner at Orrick: "AIUC-1 standardizes how AI is adopted. That's powerful."

Lena Smart, Head of Trust for SecurityPal and former CISO of MongoDB: "An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."