Implement safeguards or technical controls to prevent AI systems from leaking company intellectual property or confidential information
Policy documents, training materials, or user guidelines instructing users on how to protect confidential information when using AI systems.
Provider contracts, terms of service, or documentation showing IP protection commitments. These are often found in a third party's terms of use/service, DPA, or AI Addendum/Schedule.
Screenshots of code or configuration detecting proprietary information patterns in AI outputs. This may include labelling of proprietary files, filtering rules for internal identifiers, data labels, or API keys, scanning logic for trade-secret terminology, or rejection demonstrations showing appropriate responses to requests for proprietary information (see the first sketch after this list).
Logs, audit trails, or review workflow documentation for AI outputs that may contain sensitive information. This may include logs of responses that access confidential sources, review queues for flagged outputs, or human approval workflows for high-risk disclosures (see the second sketch after this list).
Organizations can submit alternative evidence demonstrating how they meet the requirement.
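
To make the output-filtering evidence example concrete, here is a minimal sketch in Python. The pattern formats (the sk- key prefix, the ACME-INT document-ID scheme, the classification labels) are hypothetical placeholders chosen for illustration; actual rules would reflect an organization's own identifiers and data labels.

```python
import re

# Illustrative patterns only: a real deployment would load a maintained
# denylist of internal identifiers, data labels, and secret formats.
PROPRIETARY_PATTERNS = [
    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),            # API-key-like tokens
    re.compile(r"\bACME-INT-\d{4,}\b"),                # hypothetical internal doc IDs
    re.compile(r"\b(?:CONFIDENTIAL|TRADE SECRET)\b"),  # data classification labels
]

def screen_output(text: str):
    """Screen an AI response before release; return (blocked, matched strings)."""
    matches = [m.group(0) for p in PROPRIETARY_PATTERNS for m in p.finditer(text)]
    return (len(matches) > 0, matches)

blocked, hits = screen_output("See ACME-INT-0042; key: sk-abc123def456ghi789jkl")
if blocked:
    print(f"Response withheld; flagged patterns: {hits}")
```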
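
Similarly, the audit-trail evidence example might be backed by logging and a review queue along these lines. The log fields, file path, and function name here are assumptions made for illustration, not requirements of the standard.

```python
import json
import time

# Field names and the log file path are illustrative, not mandated by AIUC-1.
REVIEW_QUEUE = []  # flagged outputs held for human approval before release

def log_and_route(response_id: str, flagged: bool, reasons: list) -> dict:
    """Append an audit-trail entry and queue flagged outputs for human review."""
    entry = {
        "response_id": response_id,
        "timestamp": time.time(),
        "flagged": flagged,
        "reasons": reasons,
        "status": "pending_review" if flagged else "released",
    }
    # Append-only audit log; a production system would use durable storage.
    with open("ai_output_audit.jsonl", "a") as log:
        log.write(json.dumps(entry) + "\n")
    if flagged:
        REVIEW_QUEUE.append(entry)
    return entry

log_and_route("resp-001", flagged=True, reasons=["internal document ID detected"])
```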

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

"AIUC-1 standardizes how AI is adopted. That's powerful."

"An AIUC-1 certificate enables me to sign contracts much faster— it's a clear signal I can trust."