Establish a risk taxonomy that categorizes risks across harmful, out-of-scope, and hallucinated outputs; tool calls; and other risks, based on application-specific usage.
Internal policy document, risk framework, or taxonomy defining AI risk categories, with severity levels and examples specific to the deployment context. Example taxonomies to draw upon include the NIST AI RMF functions, EU AI Act Article 9, and ISO/IEC 42001 controls.
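A taxonomy like this is often a prose document, but encoding it in a machine-readable form makes the severity levels and per-category examples easier to review and test against. Below is a minimal, hypothetical Python sketch; the class names, four-level severity scale, and sample categories are illustrative assumptions, not structures prescribed by AIUC-1, NIST AI RMF, the EU AI Act, or ISO/IEC 42001.

```python
from dataclasses import dataclass, field
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskCategory:
    """One entry in the AI risk taxonomy."""
    name: str                    # stable identifier, e.g. "hallucinated_output"
    description: str             # what the risk looks like in this deployment
    severity: Severity           # worst plausible impact in this context
    examples: list[str] = field(default_factory=list)  # application-specific cases


# Illustrative categories mirroring the requirement's groupings
# (harmful, out-of-scope, and hallucinated outputs; tool calls).
TAXONOMY = [
    RiskCategory(
        name="harmful_output",
        description="Model produces abusive, dangerous, or policy-violating content.",
        severity=Severity.CRITICAL,
        examples=["Instructions facilitating self-harm in a support-bot context"],
    ),
    RiskCategory(
        name="out_of_scope_output",
        description="Model answers outside its approved domain of use.",
        severity=Severity.MEDIUM,
        examples=["A billing assistant offering medical advice"],
    ),
    RiskCategory(
        name="hallucinated_tool_call",
        description="Agent invokes a tool with fabricated or unauthorized arguments.",
        severity=Severity.HIGH,
        examples=["Issuing a refund to an account the user never referenced"],
    ),
]
```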
Meeting notes, change log, or review documentation showing quarterly reviews of the risk taxonomy. Could include review dates, participants, decisions made (categories added/removed/modified, threshold adjustments), rationale for changes, approval records, and a version history showing taxonomy updates over time with timestamps. Can be standalone or part of broader internal audit/review or change-management procedures.
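The review trail described above can likewise be kept as structured records, so review dates, participants, decisions, and approvals stay queryable alongside the version history. A minimal sketch under the same caveat: every field name and the sample entry are hypothetical, not a format the standard requires.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class TaxonomyReview:
    """One quarterly review entry in the taxonomy's version history."""
    review_date: date
    version: str            # taxonomy version produced by this review
    participants: list[str]
    changes: list[str]      # categories added/removed/modified, threshold adjustments
    rationale: str
    approved_by: str


# Hypothetical entry showing the fields an auditor would expect to see.
REVIEW_LOG = [
    TaxonomyReview(
        review_date=date(2025, 1, 15),
        version="1.3",
        participants=["Head of Security", "ML Lead", "Legal Counsel"],
        changes=[
            "Added 'hallucinated_tool_call' category",
            "Raised severity of 'out_of_scope_output'",
        ],
        rationale="New agentic tool integrations expanded the attack surface.",
        approved_by="CISO",
    ),
]
```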
Organizations can submit alternative evidence demonstrating how they meet the requirement.

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

"AIUC-1 standardizes how AI is adopted. That's powerful."

"An AIUC-1 certificate enables me to sign contracts much faster— it's a clear signal I can trust."