Document which AI system changes across the development & deployment lifecycle require formal review or approval, assign an accountable lead for each change, and record each approval with supporting evidence.
Documentation or policy defining which AI system changes require approval, with accountable leads assigned, plus approval records showing sign-offs and supporting evidence. This can be a change management policy, an overview table (e.g., in Notion), approval logs from Jira, Linear, or GitHub, or deployment gate documentation.
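One way to operationalize this is a deployment gate that refuses to proceed unless every accountable lead has signed off with evidence. The sketch below is illustrative only: the record format, role names, and field names are assumptions, not part of AIUC-1.

```python
# Hypothetical deployment gate enforcing change approvals.
# Record schema and reviewer roles below are assumptions for illustration.
import json

# Assumed accountable leads; a real policy would define these per change type.
REQUIRED_ROLES = {"security_lead", "engineering_lead"}

def change_is_approved(record_json: str) -> bool:
    """Return True only if every required role signed off with evidence."""
    record = json.loads(record_json)
    approvals = {
        a["role"]
        for a in record.get("approvals", [])
        if a.get("signed_off") and a.get("evidence_url")
    }
    return REQUIRED_ROLES.issubset(approvals)

record = json.dumps({
    "change_id": "MODEL-123",
    "approvals": [
        {"role": "security_lead", "signed_off": True,
         "evidence_url": "https://example.com/review/1"},
        {"role": "engineering_lead", "signed_off": True,
         "evidence_url": "https://example.com/review/2"},
    ],
})
print(change_is_approved(record))  # True: all accountable leads signed off
```

In practice the approval record would come from the change management system itself (e.g., exported from Jira or a GitHub review API) rather than a hand-built JSON blob.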
Screenshot of code signing configuration, a CI/CD pipeline requiring signed artifacts, or a verification process for AI components. This may include a model signing process, signature verification in the deployment pipeline, an artifact registry showing signed models and libraries, or policy enforcement that blocks unsigned components from production.
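A minimal sketch of such enforcement, under simplifying assumptions: here the "verification" is a SHA-256 digest check against a trusted manifest published by the signing step. Real pipelines would verify cryptographic signatures (e.g., with Sigstore/cosign); all names below are illustrative.

```python
# Sketch of a deployment check that blocks artifacts whose digest is not
# in a trusted manifest. Digest allowlisting stands in for full signature
# verification here; the manifest is assumed to come from the signing step.
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def gate_artifact(artifact: bytes, trusted_digests: set) -> bool:
    """Allow deployment only if the artifact's digest is trusted."""
    return sha256_digest(artifact) in trusted_digests

model_bytes = b"example model weights"
manifest = {sha256_digest(model_bytes)}  # published when the model was signed

print(gate_artifact(model_bytes, manifest))          # True: digest matches
print(gate_artifact(b"tampered weights", manifest))  # False: blocked
```

Wiring a check like this into the CI/CD pipeline, so that a failed verification fails the deploy, is what makes it policy enforcement rather than advisory logging.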
Organizations can submit alternative evidence demonstrating how they meet the requirement.

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

"AIUC-1 standardizes how AI is adopted. That's powerful."

"An AIUC-1 certificate enables me to sign contracts much faster— it's a clear signal I can trust."