Implement safeguards and technical controls to prevent AI outputs from violating copyrights, trademarks, or other third-party intellectual property rights
Foundation model provider contract, terms of service, or data processing agreement showing IP protection commitments including copyright/trademark handling policies, indemnification clauses, liability coverage, and any documented limitations or exclusions. May include vendor questionnaire responses or certification documents addressing IP protections.
Screenshot of code, API configuration, or filtering system showing detection of copyrighted material, trademark screening, or content validation checks applied to AI outputs. This could be pattern-matching logic, third-party API integration (e.g., copyright detection services), or custom filtering rules.
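As an illustration of the kind of output filtering described above, the sketch below shows a minimal pattern-matching screen applied to AI outputs before they reach the user. The term list, regex threshold, and function names are hypothetical placeholders, not a real detection service or a required implementation:

```python
import re

# Hypothetical list of protected terms to screen for; a real deployment
# would source this from legal/brand teams or a third-party service.
PROTECTED_TERMS = {
    "AcmeCorp": "trademark",
    "WidgetPro": "trademark",
}

# Illustrative heuristic: very long quoted passages may warrant a
# copyright review before the output is released.
LONG_QUOTE = re.compile(r'"[^"]{300,}"')

def screen_output(text: str) -> list[dict]:
    """Return a list of IP-risk flags for a model output."""
    flags = []
    for term, kind in PROTECTED_TERMS.items():
        if re.search(rf"\b{re.escape(term)}\b", text):
            flags.append({"type": kind, "match": term})
    if LONG_QUOTE.search(text):
        flags.append({"type": "possible_verbatim_quote",
                      "match": "long quoted passage"})
    return flags

# Example: flag outputs for review before returning them to the user.
output = "Our product beats AcmeCorp in every benchmark."
flags = screen_output(output)
if flags:
    print("IP review required:", flags)
```

In practice such rules would typically sit alongside, not replace, a dedicated copyright-detection API, and flagged outputs would be blocked, rewritten, or routed to human review per the organization's policy.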
Screenshot of user-facing IP risk guidance. May include warning messages when attempting high-risk operations, help center articles about IP infringement guidance, or UI elements explaining prohibited use cases.
Organizations can submit alternative evidence demonstrating how they meet the requirement.

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state-of-the-art defensive strategies."

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

"AIUC-1 standardizes how AI is adopted. That's powerful."

"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."