AIUC-1 is designed to cover all the risks that matter for secure adoption of AI agents. It comprises 51 requirements with 130 individual mandatory and optional controls across six foundational principles: Data & Privacy, Security, Safety, Reliability, Accountability, and Society.
Which controls apply for AIUC-1 certification depends on the capabilities and risks of the AI system. For example, an internally facing automation agent with limited data and tool access only needs to demonstrate evidence for 40 controls to achieve certification.
A more powerful agent, such as an externally facing customer service agent that spans both text and voice modalities and has access to sensitive data and tool calls like executing refunds, will need to demonstrate evidence for 65 controls.
As such, the key drivers of AIUC-1 scoping are:
- the agent's system architecture, data flows, and access to sensitive data and tools
- whether agents are internally or externally facing
- the foundation models available to the agent and its input and output modalities
- how guardrails are configured, and by whom
Step 1 of an AIUC-1 audit is completing the scoping questionnaire, in collaboration with the AIUC-1 auditor. The questionnaire covers:
- Describe and justify the agents in scope for the AIUC-1 audit
- Describe the systems out of scope for the AIUC-1 audit
- Describe the AI agent system architecture
- Document AI agent data flows
- Has the organization obtained compliance certifications relevant to AI systems?
- Are agents internally or externally facing?
Additional requirements for external agents

Externally facing agents trigger A007: Prevent IP violations.

- List foundation models available to the agent(s)
- Does your company train or fine-tune its own models?
- What are the agent's input modalities?
- What are the agent's output modalities?
Depending on these answers, the following controls may be triggered:

- A007: Prevent IP violations
- B005: Implement real-time input filtering
- B009: Limit output over-exposure
- C003: Prevent harmful outputs
- C004: Prevent out-of-scope outputs
- C010: Third-party testing for harmful outputs
- C011: Third-party testing for out-of-scope outputs
- D001: Prevent hallucinated outputs
- D002: Third-party testing for hallucinations
- E002: AI failure plan for harmful outputs
- E003: AI failure plan for hallucinations
- F001: Prevent AI cyber misuse
- F002: Prevent catastrophic misuse

- A007: Prevent IP violations
- B005: Implement real-time input filtering
- B009: Limit output over-exposure
- C003: Prevent harmful outputs
- C004: Prevent out-of-scope outputs
- C010: Third-party testing for harmful outputs
- C011: Third-party testing for out-of-scope outputs
- D001: Prevent hallucinated outputs
- D002: Third-party testing for hallucinations
- E002: AI failure plan for harmful outputs
- E003: AI failure plan for hallucinations
- F002: Prevent catastrophic misuse

- A007: Prevent IP violations
- B005: Implement real-time input filtering
- C003: Prevent harmful outputs
- C010: Third-party testing for harmful outputs
- E002: AI failure plan for harmful outputs
- F002: Prevent catastrophic misuse
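
For illustration, here is a minimal sketch of how scoping answers could map to triggered controls. The modality keys (`text`, `voice`, `image`) assigned to the lists above are assumptions, as is the simple union rule; the source does not name them here, and the actual determination of applicable controls is made with the AIUC-1 auditor.

```python
# Minimal sketch: how scoping answers might map to triggered AIUC-1
# controls. The modality keys ("text", "voice", "image") and the union
# rule are illustrative assumptions, not part of the standard.

EXTERNALLY_FACING_CONTROLS = {"A007"}  # per the external-agent note above

# Control IDs transcribed from the lists above; the row labels are
# hypothetical.
MODALITY_CONTROLS = {
    "text": {
        "A007", "B005", "B009", "C003", "C004", "C010", "C011",
        "D001", "D002", "E002", "E003", "F001", "F002",
    },
    "voice": {
        "A007", "B005", "B009", "C003", "C004", "C010", "C011",
        "D001", "D002", "E002", "E003", "F002",
    },
    "image": {"A007", "B005", "C003", "C010", "E002", "F002"},
}

def triggered_controls(externally_facing: bool, modalities: list[str]) -> set[str]:
    """Union of the controls triggered by each scoping answer."""
    controls = set(EXTERNALLY_FACING_CONTROLS) if externally_facing else set()
    for modality in modalities:
        controls |= MODALITY_CONTROLS.get(modality, set())
    return controls

# Example: an externally facing text + voice agent.
print(sorted(triggered_controls(True, ["text", "voice"])))
```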
Which party configures guardrails?

If Customer/Both:
- Are guardrails enabled by default?
- Does the agent builder offer support with guardrail implementation?
- Does the agent builder offer evals of guardrails?
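
As a minimal sketch of how the guardrail posture asked about above might be recorded, with hypothetical field and type names that are not AIUC-1 terms:

```python
# Minimal sketch with hypothetical field names (not AIUC-1 terms):
# records the guardrail posture the questionnaire asks about.
from dataclasses import dataclass
from enum import Enum

class ConfiguringParty(Enum):
    BUILDER = "builder"
    CUSTOMER = "customer"
    BOTH = "both"

@dataclass
class GuardrailPosture:
    configuring_party: ConfiguringParty
    enabled_by_default: bool = True
    builder_supports_implementation: bool = False
    builder_offers_guardrail_evals: bool = False

    def customer_follow_ups_apply(self) -> bool:
        # The "If Customer/Both" questions above apply only when the
        # customer can configure guardrails.
        return self.configuring_party in (
            ConfiguringParty.CUSTOMER,
            ConfiguringParty.BOTH,
        )

posture = GuardrailPosture(ConfiguringParty.BOTH, enabled_by_default=True)
print(posture.customer_follow_ups_apply())  # True
```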
Questions? Read the FAQ here or get in touch.