AIUC-1 × Colorado AI Act (SB 24-205): Detailed crosswalk

See the high-level crosswalk to the Colorado AI Act sections here.

This crosswalk is provided for informational purposes only and does not constitute legal advice. Sections with no operative analog to AIUC-1 (e.g. addressing definitions, enforcement mechanisms, regulatory infrastructure, and legislative safe harbors) have been omitted from crosswalk mappings. Organizations should consult qualified legal counsel to determine their specific compliance obligations under the Colorado AI Act.

Colorado AI Act detailed crosswalk by subsection

6-1-1702(1) Developer duty of care

Subsection summary: Developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of their high-risk AI system.

Gap analysis: No Gap. AIUC-1 addresses harm prevention broadly, and pursuing certification presupposes reasonable care to protect consumers.
6-1-1702(2)–(3) Developer documentation to deployers

Subsection summary: Developers must provide deployers with: training data type summaries; known limitations and discrimination risks; pre-deployment evaluation methodology; data governance measures; the purpose of the system and its intended outputs; mitigation measures taken; and usage/monitoring instructions. They must also supply documentation sufficient for deployers to complete their own impact assessments (e.g. model cards, dataset cards).

Gap analysis: No Gap. Met by organizations that establish an input data policy and document a system transparency policy.
6-1-1702(4) Public disclosure of risk management summary

Subsection summary: Developers must publicly post on their website a summary of the types of high-risk AI systems they offer and how they manage discrimination risks. The summary must be updated within 90 days of any intentional and substantial modification.

Gap analysis: Partial Gap. AIUC-1 requires transparency reporting, but does not prescribe a mandatory update trigger.
6-1-1702(5) Disclosure of discovered discrimination risks

Subsection summary: Developers must disclose to the AG and all known deployers any discovered or credibly reported algorithmic discrimination risk within 90 days of discovery.

Gap analysis: Partial Gap. AIUC-1 requires an AI failure plan for harmful outputs, but does not require external disclosure of discovered discrimination risks to a regulator or downstream deployers within a defined timeframe.
6-1-1703(1) Deployer duty of care

Subsection summary: Deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination when deploying a high-risk AI system.

Gap analysis: No Gap. AIUC-1 addresses harm prevention broadly, and pursuing certification presupposes reasonable care to protect consumers.
6-1-1703(2) Deployer risk management policy & program

Subsection summary: Deployers must implement a documented, iterative risk management program identifying, documenting, and mitigating discrimination risks. The program must be calibrated to deployer size, system complexity, and data sensitivity. NIST AI RMF and ISO 42001 are recognized reference frameworks.

Gap analysis: No Gap. Met by organizations that define an AI risk taxonomy and implement a quality management system.
6-1-1703(3)(a)–(f) Deployer impact assessment

Subsection summary: Deployers must complete a structured impact assessment at first deployment, annually, and within 90 days of any intentional and substantial modification. The assessment must cover: intended purpose; deployment purpose; discrimination risk analysis; data categories and outputs; customization data; performance metrics; transparency measures; and a post-deployment monitoring plan. Records must be retained for three years after final deployment.

Gap analysis: No Gap. Met by organizations that monitor AI risk categories and implement a quality management system.
6-1-1703(3)(g) Annual anti-discrimination review

Subsection summary: Deployers must review each deployed high-risk AI system at least annually to confirm it is not causing algorithmic discrimination.

Gap analysis: Partial Gap. AIUC-1 requires organizations to monitor AI risk categories and undergo quarterly testing, but that testing does not currently test explicitly for algorithmic discrimination.
6-1-1703(4)(a) Pre-decision consumer notification

Subsection summary: Before a consequential decision is made, deployers must notify the affected consumer that AI is in use, disclose the system's purpose and the nature of the decision, provide deployer contact information, and inform the consumer of any applicable profiling opt-out rights.

Gap analysis: Partial Gap. AIUC-1 requires AI disclosure mechanisms, but does not address advance-notice timing requirements or consumer profiling opt-out rights.
6-1-1703(4)(b) Adverse decision explanation & correction rights

Subsection summary: For adverse consequential decisions, deployers must provide: the principal reasons (including the AI's contribution, data type, and data source); an opportunity to correct incorrect personal data; and an opportunity to appeal with human review where technically feasible.

Gap analysis: Partial Gap. AIUC-1 requires real-time feedback and intervention mechanisms, but does not address adverse-decision explanation requirements, individual data correction rights, or human appeal obligations.
6-1-1703(5) Public statement on deployed systems

Subsection summary: Deployers must publicly post a summary of the high-risk AI systems they deploy and their approach to managing discrimination risks. The summary must be updated within 90 days of any intentional and substantial modification.

Gap analysis: Partial Gap. AIUC-1 requires transparency reporting, but does not prescribe a mandatory update trigger.
6-1-1706 & 6-1-1707 Enforcement & AG rulemaking authority

Subsection summary: The AG has exclusive enforcement authority. Violations are unfair trade practices under the Colorado Consumer Protection Act. Compliance with a recognized AI risk management framework (NIST AI RMF, ISO 42001, or an AG-designated equivalent) is an affirmative defense. The AG has rulemaking authority to promulgate implementing regulations.

Gap analysis: No Gap. Met by organizations that document and follow regulatory compliance.
Last updated April 13, 2026.