Implement security measures for AI system deployment environments, including encryption, access controls, and authorization
IAM configuration, permission settings, or admin panel views showing role-based access restrictions for production AI systems - may include IAM role assignments restricting model access by job function, MFA configuration for model system access, or access review records validating model permissions.
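As a concrete illustration of role-based access restriction by job function, the sketch below shows a minimal permission check. The role names and the permission map are hypothetical examples, not prescribed by the standard:

```python
# Minimal sketch of role-based access control for model operations.
# Role names and permissions are illustrative, not normative.
ROLE_PERMISSIONS = {
    "ml-engineer": {"model:read", "model:deploy"},
    "data-analyst": {"model:read"},
    "support-agent": set(),  # no direct model access
}

def is_authorized(role: str, action: str) -> bool:
    """Return True if the given job-function role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Evidence for this control would typically be the equivalent IAM policy or admin panel configuration rather than application code.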
Configuration or code showing caller authentication controls - may include scoped API token or signed request configuration for model API endpoints, OAuth token scoping or OIDC validation middleware for MCP server connections, or mutual authentication configuration for agent-to-agent interfaces (e.g. A2A protocol authentication config).
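A scoped-token check of the kind described above might look like the following sketch. This is a simplified stand-in for OAuth scope validation or OIDC middleware, using a symmetric HMAC signature; the secret, claim names, and scope strings are assumptions for illustration:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"example-shared-secret"  # illustrative; real deployments use managed keys

def issue_token(scopes: list[str]) -> str:
    """Create a signed token carrying a list of scopes (simplified JWT-like format)."""
    payload = json.dumps({"scopes": scopes}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_scoped_token(token: str, required_scope: str) -> bool:
    """Verify the token's signature, then confirm it grants the required scope."""
    try:
        payload_b64, sig_hex = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64.encode())
        expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, sig_hex):
            return False  # tampered or forged token
        claims = json.loads(payload)
        return required_scope in claims.get("scopes", [])
    except ValueError:
        return False  # malformed token
```

The key property evidenced here is that a caller holding a token scoped to one endpoint (e.g. `model:invoke`) cannot use it against another (e.g. `model:admin`).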
Configuration or code showing transport security controls - may include TLS/HTTPS certificate configuration for model API endpoints, MCP server traffic, or agent-to-agent connections, or credential rotation policy documentation for service-level MCP or A2A connections.
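The client-side half of such transport security evidence can be as small as a TLS context that requires certificate verification and refuses legacy protocol versions. The version floor below is an illustrative choice, not a requirement of the standard:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a client TLS context that verifies certificates and hostnames
    and rejects TLS 1.0/1.1 - the kind of setting this evidence might show."""
    ctx = ssl.create_default_context()            # CERT_REQUIRED + hostname check
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # illustrative protocol floor
    return ctx
```

Server-side evidence would more often take the form of web server or load balancer configuration (certificate paths, cipher suites) plus a documented credential rotation schedule.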
Container configuration or infrastructure setup for model hosting - may include Dockerfile with minimal base images and up-to-date dependencies, vulnerability scanning results from Trivy or Snyk for container images, or infrastructure configuration showing isolation techniques (container namespaces, VM separation, network policies, dedicated GPU allocation).
Deployment pipeline or code implementing model integrity checks - may include cryptographic checksum verification, model artifact signature validation, hash comparison before deployment, model scanning configuration detecting malicious payloads in serialized formats (e.g. Pickle, ONNX) using tools like Cisco's pickle-fuzzer or Trail of Bits' Fickling, or deployment logs recording model version hashes.
Configuration or code showing data integrity controls for agentic interfaces - may include cryptographic message signing configuration for agent-to-agent interfaces (e.g. signed agent cards), or schema validation configuration applied to MCP tool call inputs and outputs.
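Schema validation for tool call inputs can be evidenced with a declarative schema plus a validator that rejects malformed payloads. The sketch below uses a deliberately minimal hand-rolled check; the field names and schema shape are hypothetical, and a real implementation would more likely use JSON Schema tooling:

```python
# Hypothetical input schema for a single MCP tool call; fields are illustrative.
TOOL_INPUT_SCHEMA = {
    "query": str,
    "max_results": int,
}

def validate_tool_input(payload: dict) -> list[str]:
    """Return a list of violations; an empty list means the payload conforms."""
    errors = []
    for field, expected_type in TOOL_INPUT_SCHEMA.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"bad type for {field}: {type(payload[field]).__name__}")
    for field in payload:
        if field not in TOOL_INPUT_SCHEMA:
            errors.append(f"unexpected field: {field}")  # reject unknown keys
    return errors
```

Rejecting unknown fields, as well as wrong types, limits the surface for smuggling unexpected instructions or data through agentic interfaces.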
Organizations can submit alternative evidence demonstrating how they meet the requirement.

"We need a SOC 2 for AI agents — a familiar, actionable standard for security and trust."

"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

"Today, enterprises can't reliably assess the security of their AI vendors — we need a standard to address this gap."

"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

"AIUC-1 standardizes how AI is adopted. That's powerful."

"An AIUC-1 certificate enables me to sign contracts much faster — it's a clear signal I can trust."