
AIUC-1 × California SB 53 (Transparency in Frontier AI Act)

California's SB 53, the Transparency in Frontier AI Act (TFAIA), regulates frontier AI developers - those training models at scale above 10²⁶ FLOPs - with enhanced obligations for large frontier developers exceeding $500M in annual gross revenue. It requires covered developers to publish a Frontier AI Framework, issue transparency reports, report critical safety incidents to the Office of Emergency Services, and submit quarterly catastrophic risk assessments.

Given the regulation's current scope, companies certifying against AIUC-1 are typically not in scope for the compliance obligations under CA SB 53.

The crosswalk below highlights the main obligations set out in CA SB 53 and the AIUC-1 requirements that pursue similar objectives. See the crosswalk to the detailed CA SB 53 subsections here.

This crosswalk is provided for informational purposes only and does not constitute legal advice. Sections with no operative analog in AIUC-1 (e.g. those addressing definitions, enforcement mechanisms, regulatory infrastructure, and legislative safe harbors) have been omitted from the crosswalk mappings. Organizations should consult qualified legal counsel to determine their specific compliance obligations under California SB 53.

California SB 53 crosswalk by section

CA SB 53 section
22757.12: Transparency & Reporting Obligations

Section summary
Requires large frontier developers to publish a Frontier AI Framework and transparency reports, and to submit quarterly catastrophic risk assessments to OES.

Gap analysis
Partial Gap. AIUC-1 addresses transparency reporting and internal process review but does not prescribe model weight security or mandatory government reporting cadences.
CA SB 53 section
22757.13: Critical Safety Incident Reporting

Section summary
Requires all frontier developers to report critical safety incidents to OES within 15 days, and immediately to appropriate authorities when an incident poses imminent harm.

Gap analysis
Partial Gap. AIUC-1 requires an AI failure plan for harmful outputs, but does not address mandatory government reporting windows or imminent-harm escalation triggers.
Last updated April 13, 2026.