See the high-level crosswalk to the Colorado AI Act sections here.
This crosswalk is provided for informational purposes only and does not constitute legal advice. Sections with no operative analog to AIUC-1 (e.g. addressing definitions, enforcement mechanisms, regulatory infrastructure, and legislative safe harbors) have been omitted from crosswalk mappings. Organizations should consult qualified legal counsel to determine their specific compliance obligations under the Colorado AI Act.
6-1-1702(1) Developer duty of care
Developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of their high-risk AI system.
6-1-1702(2)–(3) Developer documentation to deployers
Developers must provide deployers with: training data type summaries; known limitations and discrimination risks; pre-deployment evaluation methodology; data governance measures; purpose of system and intended outputs; mitigation measures taken; and usage/monitoring instructions. Must also supply documentation sufficient for deployers to complete their own impact assessment (e.g. model cards, dataset cards).
6-1-1702(4) Public disclosure of risk management summary
Developers must publicly post on their website a summary of the types of high-risk AI systems they offer and how they manage discrimination risks. Must be updated within 90 days of any intentional and substantial modification.
6-1-1702(5) Disclosure of discovered discrimination risks
Developers must disclose to the AG and all known deployers any discovered or credibly reported algorithmic discrimination risk within 90 days of discovery.
6-1-1703(1) Deployer duty of care
Deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination when deploying a high-risk AI system.
6-1-1703(2) Deployer risk management policy & program
Deployers must implement a documented risk management policy and an iterative risk management program that identifies, documents, and mitigates discrimination risks. Both must be calibrated to the deployer's size, the system's complexity, and the sensitivity of the data processed. The NIST AI RMF and ISO 42001 are recognised reference frameworks.
6-1-1703(3)(a)–(f) Deployer impact assessment
Deployers must complete a structured impact assessment at first deployment, annually thereafter, and within 90 days of any intentional and substantial modification. The assessment must cover: purpose and intended use cases; deployment context; discrimination risk analysis; data categories and outputs; customisation data; performance metrics and limitations; transparency measures; and the post-deployment monitoring plan. Records must be retained for 3 years after final deployment.
6-1-1703(3)(g) Annual anti-discrimination review
Deployers must review each deployed high-risk AI system at least annually to confirm it is not causing algorithmic discrimination.
6-1-1703(4)(a) Pre-decision consumer notification
Before a consequential decision is made, deployers must notify the affected consumer that a high-risk AI system is in use, disclose the system's purpose and the nature of the decision, provide deployer contact information, and inform the consumer of any applicable right to opt out of profiling.
6-1-1703(4)(b) Adverse decision explanation & correction rights
For adverse consequential decisions, deployers must provide: the principal reasons for the decision (including the degree to which the AI system contributed, the type of data processed, and its sources); an opportunity to correct incorrect personal data; and an opportunity to appeal with human review where technically feasible.
6-1-1703(5) Public statement on deployed systems
Deployers must publicly post a summary of the high-risk AI systems they deploy and their approach to managing discrimination risks. Must be updated within 90 days of any intentional and substantial modification.
6-1-1706 & 6-1-1707 Enforcement & AG rulemaking authority
The AG has exclusive enforcement authority; violations constitute unfair trade practices under the Colorado Consumer Protection Act. A developer or deployer that discovers and cures a violation (e.g. through internal testing or red-teaming) and otherwise complies with a recognised AI risk management framework (NIST AI RMF, ISO 42001, or an AG-designated equivalent) has an affirmative defence. The AG has rulemaking authority to promulgate implementing regulations.