AIUC-1
C002

Conduct pre-deployment testing

Conduct internal testing of AI systems prior to deployment across risk categories for system changes requiring formal review or approval

Keywords
Internal Testing
Pre-Deployment Testing
Application
Mandatory
Frequency
Every 12 months
Type
Preventative
Crosswalks
AML-M0016: Vulnerability Scanning
Article 9: Risk Management System
Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems
A.6.2.5: AI system deployment
A.6.2.4: AI system verification and validation
GOVERN 4.3: Testing and incident sharing
MANAGE 1.1: Purpose achievement
MAP 4.2: Internal risk controls
MEASURE 2.1: TEVV documentation
MEASURE 2.3: Performance demonstration
MEASURE 2.5: Validity and reliability
MEASURE 4.3: Performance tracking
AIS-05: Application Security Testing
AIS-06: Secure Application Deployment
AIS-07: Application Vulnerability Remediation
AIS-12: Source Code Management
CCC-02: Quality Testing
AIS-04: Secure Application Development Lifecycle
TVM-05: External Library Vulnerabilities
Conducting pre-deployment testing with documented results and identified issues. For example, structured hallucination testing, adversarial prompting, safety unit tests, and scenario-based walkthroughs.
Completing risk assessments of identified issues before system deployment. For example, potential impact analysis, mitigation strategies, and residual risk evaluation.
Obtaining approval sign-offs from designated accountable. For example, documented rationale for approval decisions and maintained records for review purposes.
C002.1 Documentation: Pre-deployment test and approval records

Test results with identified issues and severity ratings, risk assessment with mitigation decisions, and approval sign-offs with rationale - may be combined in deployment gate documentation or provided as separate documents (e.g., test suite outputs from GitHub Actions/pytest, Jira/Linear tickets with risk assessment and approval, staging environment test reports, deployment checklist with sign-offs).

Engineering Practice
Universal
Integrating AI system testing into established software development lifecycle (SDLC) gates. For example, including threat modelling and risk evaluation during design phases, requiring risk evaluation and sign-off at staging or pre-production milestones, aligning with CI/CD or MLOps pipelines, and documenting test artefacts in shared repositories."
C002.2 Config: SDLC integration

CI/CD pipeline configuration or workflow showing AI testing integrated as deployment gate - may include GitHub Actions/Jenkins/GitLab CI config files requiring test passage, pull request templates with testing checklists, or branch protection rules enforcing pre-deployment validation.

Engineering Practice
Universal
Implementing pre-deployment vulnerability scanning of AI artifacts and dependencies. For example, scanning AI models and ML libraries for security vulnerabilities, validating runtime behavior for unsafe operations, and analyzing outputs for harmful content before deployment.
C002.3 Documentation: Vulnerability scan results

Screenshot of security scanning tools or CI/CD pipeline showing vulnerability analysis of AI artifacts and dependencies - may include GitHub/GitLab security tab with dependency alerts, Snyk or Dependabot vulnerability findings, pip-audit or safety check terminal output showing CVE scans, model file scanning results, or CI/CD logs showing security scan execution.

Engineering Tooling
Universal

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

Phil Venables

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
Dr. Christina Liaghati

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
Hyrum Anderson

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
Prof. Sanmi Koyejo

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
John Bautista

"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick
Lena Smart

"An AIUC-1 certificate enables me to sign contracts much faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB