AIUC-1
B004

Prevent AI endpoint scraping

Implement safeguards to prevent probing or scraping of external AI endpoints

Keywords
Scraping
Probing
Rate Limiting
Query Quotas
Zero Trust
Application: Mandatory
Frequency: Every 12 months
Type: Preventative
Crosswalks
AML-M0003: Model Hardening
AML-M0004: Restrict Number of AI Model Queries
Article 15: Accuracy, Robustness and Cybersecurity
MEASURE 2.7: Security and resilience
LLM02:25 - Sensitive Information Disclosure
LLM05:25 - Improper Output Handling
LLM08:25 - Vector and Embedding Weaknesses
LLM10:25 - Unbounded Consumption
AIS-10: API Security
UEM-01: Endpoint Devices Policy and Procedures
UEM-05: Endpoint Management
UEM-09: Anti-Malware Detection and Prevention
UEM-10: Software Firewall
UEM-11: Data Loss Prevention
UEM-14: Third-Party Endpoint Security Posture
TVM-06: Penetration Testing
Implementing systems that distinguish between high-volume legitimate usage and adversarial behavior. For example, using behavioral analytics and user profiling to calibrate detection thresholds and avoid false positives against trusted users.
B004.1 Config: Anomalous usage detection

Screenshot of anomaly detection system or configuration file - may include behavioral analytics dashboard (Datadog, Elastic, Splunk) with user scoring rules, rate limiting configuration with tier-based thresholds (config.yaml, API gateway settings), user allowlists or reputation tables, or code implementing session-based threshold logic.

Engineering Tooling
Engineering Code
Universal
Implementing rate limiting and query restrictions. For example, establishing per-user quotas to prevent model extraction, blocking excessive query patterns, implementing progressive restrictions for suspicious behavior, or using economic disincentives for high-volume usage.
B004.2 Config: Rate limits

Screenshot of rate limiting configuration for API endpoints - may include per-user quota settings, query throttling rules, progressive restriction policies, WAF configuration (Cloudflare, AWS WAF, Azure Application Gateway) with blocking rules for excessive patterns, or pricing tier settings implementing usage-based cost increases.

Engineering Tooling
Universal
Conducting simulated external attack testing of AI endpoints. For example, performing automated attack simulations, testing endpoint protection effectiveness against high-volume and distributed attacks, and documenting methodologies appropriate to organizational threat profile.
B004.3 Report: External pentest of AI endpoints

Third-party penetration test report for AI endpoints including attack simulations tested (e.g. scraping attempts, brute force, reconnaissance), rate limiting and endpoint protection validation, distributed attack testing, test methodology, and findings on protection effectiveness.

Engineering Practice
Universal
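The high-volume attack simulations B004.3 calls for can be automated with a small harness that fires a burst of probes and tallies how many the endpoint rejects. The sketch below is transport-agnostic: `send` is any callable mapping a request index to an HTTP status code, so a real test would wrap an HTTP client pointed at the AI endpoint. The function names and the 50% blocked-ratio criterion are illustrative assumptions.

```python
from collections import Counter
from typing import Callable

def burst_probe(send: Callable[[int], int], n: int = 500) -> Counter:
    """Fire n sequential probe requests and tally the response codes seen."""
    return Counter(send(i) for i in range(n))

def protection_effective(results: Counter, min_blocked_ratio: float = 0.5) -> bool:
    """Judge rate limiting effective if enough of the burst was rejected (429)."""
    total = sum(results.values())
    return total > 0 and results.get(429, 0) / total >= min_blocked_ratio
```

Distributed-attack testing would vary source IPs and API keys across the burst; the pass criterion (here, at least half of the burst throttled) should be tuned to the organization's threat profile, as the control notes.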
Maintaining endpoint security through remediation. For example, tracking identified vulnerabilities, implementing protective measures based on testing outcomes, and regularly updating endpoint defenses and detection thresholds.
B004.4 Documentation: Vulnerability remediation

Screenshot of issue tracking system (GitHub, Jira, Linear) showing endpoint vulnerability lifecycle - must include vulnerability identification, remediation proposal, implementation, and production deployment with timestamps and approval records.

Engineering Practice
Universal

Organizations can submit alternative evidence demonstrating how they meet the requirement.

AIUC-1 is built with industry leaders

"We need a SOC 2 for AI agents— a familiar, actionable standard for security and trust."

Google Cloud
Phil Venables
Former CISO of Google Cloud
"Integrating MITRE ATLAS ensures AI security risk management tools are informed by the latest AI threat patterns and leverage state of the art defensive strategies."

MITRE
Dr. Christina Liaghati
MITRE ATLAS lead
"Today, enterprises can't reliably assess the security of their AI vendors— we need a standard to address this gap."

Cisco
Hyrum Anderson
Senior Director, Security & AI
"Built on the latest advances in AI research, AIUC-1 empowers organizations to identify, assess, and mitigate AI risks with confidence."

Stanford
Prof. Sanmi Koyejo
Lead for Stanford Trustworthy AI Research
"AIUC-1 standardizes how AI is adopted. That's powerful."

Orrick
John Bautista
Partner at Orrick
"An AIUC-1 certificate enables me to sign contracts much faster— it's a clear signal I can trust."

SecurityPal
Lena Smart
Head of Trust for SecurityPal and former CISO of MongoDB