AIUC-1
Context
IntroductionCertificate overview
Framework comparisons
ChangelogAIUC-1 ConsortiumProvide input on AIUC-1Contact
Standard
A. Data & Privacy
B. Security
C. Safety
Define AI risk taxonomyConduct pre-deployment testingPrevent harmful outputsPrevent out-of-scope outputsPrevent customer-defined high risk outputsPrevent output vulnerabilitiesFlag high risk outputs for human reviewMonitor AI risk categoriesEnable real-time feedback and interventionThird-party testing for harmful outputsThird-party testing for out-of-scope outputsThird-party testing for customer-defined risk
D. Reliability
E. Accountability
F. Society
Certification
AIUC-1 certification Scoping Accredited auditors FAQ
Evidence overview
AIUC-1

Share your details and let us know how you hope to use AIUC-1

I am interested in...

The Security, Safety, and Reliability standard for AI agents

Stay up to date with AIUC-1

AIUC-1
AIUC-1.COM

© 2026.AIUC

OverviewChangelogConsortium

LEGAL

Privacy PolicyTerms of Service
AIUC-1 Standard
→
C. Safety
→
C002. Conduct pre-deployment testing
C002

Conduct pre-deployment testing

Conduct internal testing of AI systems prior to deployment across risk categories for system changes requiring formal review or approval

Keywords

Internal TestingPre-Deployment Testing

Application

Mandatory

Frequency

Every 12 months

Type

Preventative

Crosswalks

MITRE ATLAS
AML-M0016: Vulnerability Scanning
EU AI Act
Article 9: Risk Management System
Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems
ISO 42001
A.6.2.5: AI system deployment
A.6.2.4: AI system verification and validation
NIST AI RMF
GOVERN 4.3: Testing and incident sharing
MANAGE 1.1: Purpose achievement
MAP 4.2: Internal risk controls
MEASURE 2.1: TEVV documentation
MEASURE 2.3: Performance demonstration
MEASURE 2.5: Validity and reliability
MEASURE 4.3: Performance tracking
CSA AICM
AIS-05: Application Security Testing
AIS-06: Secure Application Deployment
AIS-07: Application Vulnerability Remediation
AIS-12: Source Code Management
CCC-02: Quality Testing
AIS-04: Secure Application Development Lifecycle
TVM-05: External Library Vulnerabilities
IBM AI Risk Atlas
IBM 11: Agentic AI - Incomplete AI agent evaluation
IBM 84: Non-Technical - Unrepresentative risk testing
IBM 85: Non-Technical - Incorrect risk testing
IBM 86: Non-Technical - Lack of testing diversity
Cisco AI Security Framework
AITech-15.1: Harmful Content
CO AI Act
6-1-1702: Developer Duties
6-1-1703: Deployer Duties
CA SB 53
22757.12: Transparency & Reporting Obligations

Control activities

Typical evidence

Conducting pre-deployment testing with documented results and identified issues. For example, structured hallucination testing, adversarial prompting, safety unit tests, and scenario-based walkthroughs.
Completing risk assessments of identified issues before system deployment. For example, potential impact analysis, mitigation strategies, and residual risk evaluation.
Obtaining approval sign-offs from designated accountable. For example, documented rationale for approval decisions and maintained records for review purposes.
C002.1 Documentation: Pre-deployment test and approval records

Test results with identified issues and severity ratings, risk assessment with mitigation decisions, and approval sign-offs with rationale - may be combined in deployment gate documentation or provided as separate documents (e.g., test suite outputs from GitHub Actions/pytest, Jira/Linear tickets with risk assessment and approval, staging environment test reports, deployment checklist with sign-offs).

Category

Technical Implementation
Engineering Practice
Universal
Integrating AI system testing into established software development lifecycle (SDLC) gates. For example, including threat modelling and risk evaluation during design phases, requiring risk evaluation and sign-off at staging or pre-production milestones, aligning with CI/CD or MLOps pipelines, and documenting test artefacts in shared repositories."
C002.2 Config: SDLC integration

CI/CD pipeline configuration or workflow showing AI testing integrated as deployment gate - may include GitHub Actions/Jenkins/GitLab CI config files requiring test passage, pull request templates with testing checklists, or branch protection rules enforcing pre-deployment validation.

Category

Technical Implementation
Engineering Practice
Universal
Implementing pre-deployment vulnerability scanning of AI artifacts and dependencies. For example, scanning AI models and ML libraries for security vulnerabilities, validating runtime behavior for unsafe operations, and analyzing outputs for harmful content before deployment.
C002.3 Documentation: Vulnerability scan results

Security scanning tools or CI/CD pipeline showing vulnerability analysis of AI artifacts and dependencies - may include GitHub/GitLab security tab with dependency alerts, Snyk or Dependabot vulnerability findings, pip-audit or safety check terminal output showing CVE scans, model file scanning results, or CI/CD logs showing security scan execution.

Category

Technical Implementation
Engineering Tooling
Universal

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

Phil Venables

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
Dr. Christina Liaghati

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
Hyrum Anderson

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
Prof. Sanmi Koyejo

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
John Bautista

"AIUC-1standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick
Lena Smart

"An AIUC-1certificate enables me to sign contracts much faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB