(and Why Uncertainty Is Growing as We Head into 2026)
Artificial intelligence isn’t just shaping the future—it’s already here. Whether it’s using ChatGPT to draft an email, scrolling past AI‑generated ads, or transitioning to smart tools at work, nearly everyone has experienced both the promise and pitfalls that come with it. Law firms, local businesses, and tech startups alike are finding new ways to harness AI’s power while also grappling with its risks. That’s where Colorado’s new law, SB 24-205, steps in, aiming to bring some clarity and accountability to how AI is used across the state.
If your law firm, company, or startup develops or deploys artificial intelligence (AI) systems that play a significant role in consequential decisions affecting Colorado consumers and residents (such as hiring, promotion, lending, housing, insurance, healthcare, education, credit, government benefits, or legal services), your operations likely fall under one of the most comprehensive state-level AI regulations in the United States.
Enacted on May 17, 2024, the Colorado Artificial Intelligence Act (SB 24-205), also known as the Anti-Discrimination in Artificial Intelligence Law (ADAI), is the nation’s first comprehensive state-level artificial intelligence consumer protection law. This legislation imposes groundbreaking requirements on both developers and deployers of high-risk AI systems. Its primary goal is to prevent algorithmic discrimination in decisions that materially impact consumers’ lives.
Originally set to take effect February 1, 2026, the law is now delayed until June 30, 2026, followed by a full-year mandatory cure period ending June 30, 2027. That five-month postponement (enacted via SB 25B-004, signed by Governor Jared Polis on August 28, 2025), plus the grace period, should feel like breathing room. For most, it feels like the calm before the storm.
The Law in Plain Language
If your company builds or uses high-risk AI systems that make important decisions for consumers in Colorado, the new state law now imposes some of the toughest requirements in the country.
At its core, the ADAI requires both developers and deployers of these systems to take reasonable care to avoid algorithmic discrimination—meaning any unfair treatment or disparate impact based on race, gender, age, disability, or other characteristics protected under the statute.
A high-risk AI system is defined as any AI application that makes, or is a substantial factor in making, a consequential decision. The law zeroes in on eight critical areas that the legislature has identified as particularly consequential: education opportunities (admissions and scholarships), employment opportunities (hiring, promotion, and firing), financial services (credit and lending), healthcare services (diagnosis and treatment), housing (rentals and mortgages), insurance (coverage and pricing), government benefits, and legal services.
The law also draws a clear line between two important roles: developers and deployers.
- Developers: Companies that develop or significantly modify high-risk AI systems. Developers must give deployers detailed documentation about how the system works, its limitations, and any known risks. If they discover the AI could cause discrimination, they have 90 days to notify the Colorado Attorney General and all known deployers (without revealing trade secrets). For generative AI tools, additional rules require tracking training data, enabling detection of AI-generated content, and complying with copyright obligations; and
- Deployers: Companies that use high-risk AI systems in Colorado. Deployers have an even longer to-do list. They must implement and annually review a risk-management program, conduct yearly impact assessments (covering what the system does, what data it uses, known risks, and mitigation steps), and—starting with systems deployed after July 1, 2025—document any off-label uses. Before an AI decision affects a consumer, the company must disclose that AI is involved, explain its role, and make clear whether it was the sole basis for the outcome. If the AI reaches a decision adverse to the consumer, that individual is entitled to a plain-English explanation of the principal reasons and information about opt-out rights under the Colorado Privacy Act. Any discovered discrimination must be reported to the Attorney General within 90 days.
Enforcement of the ADAI rests solely with the Colorado Attorney General and district attorneys; there is no private right of action, meaning no private lawsuits are allowed under the statute (though related claims under other laws may still be possible). Violations are treated as deceptive trade practices, and fines can reach $20,000 per violation once any initial grace period expires.
In short, if you develop or use AI systems that touch consequential decisions for consumers in Colorado, now is the time to review your systems and processes. This law sets a new national benchmark, and many expect other states to follow.
Safe Harbor, Rebuttable Presumptions, and Defenses
While the ADAI imposes strict requirements on developers and deployers alike, the law also provides a rebuttable presumption that a high-risk AI system was used with reasonable care to avoid algorithmic discrimination if certain requirements are met. Developers qualify for the presumption by providing deployers with the required documentation on risks, training data, and mitigation, plus public summaries of their AI systems. Deployers of high-risk AI systems qualify by implementing a documented risk management policy mapped to the NIST AI Risk Management Framework or the ISO/IEC 42001 standard, conducting annual impact assessments, notifying consumers of AI use in consequential decisions, and disclosing possible risks. Notably, formal ISO/IEC 42001 certification is not required; a documented policy aligned with either framework suffices.
Organizations that demonstrate genuine adoption of these recognized standards receive a rebuttable presumption of reasonable care and, under the conditions described below, an affirmative defense to accusations of discrimination in the deployment of an AI system. It is increasingly important that organizations that deploy AI systems review internal policies and practices ahead of the ADAI taking effect.
Separately from the presumption, the ADAI provides an affirmative defense in an Attorney General enforcement action when a deployer has implemented a qualifying risk management policy and, as a result of that policy, discovers and cures a potential violation of the statute. If these conditions are met, the deployer can assert the affirmative defense even where a technical violation occurred.
What Critics Are Saying About the Bill
From the outset, SB 24-205 has sparked intense debate nationwide, with critics on both sides highlighting the tension between innovation and protection. Tech industry groups such as the U.S. Chamber of Commerce, Chamber of Progress, and Consumer Technology Association have decried its broad scope and requirements (such as annual impact assessments and disclosures) as onerous, especially for small and mid-sized businesses, warning that the law could stifle AI adoption and create a compliance “patchwork” that drives firms from Colorado. They argue for bolstering existing anti-discrimination laws rather than creating new AI-specific mandates, noting the technical challenges of detecting bias in training data.
Consumer advocates and civil rights groups such as the Center for Democracy and Technology and Consumer Reports hail it as a pioneering safeguard but criticize its “loopholes”: broad exemptions for narrow procedural tasks and trade secrets that could conceal biases, weak prohibitions on selling discriminatory tools, and easy safe harbors that let companies evade accountability. They push for expansions such as outright bans on harmful systems, stronger opt-out rights, and broader worker and consumer coverage to better shield vulnerable groups from hidden harms in hiring or lending.
Governor Polis also weighed in with his reservations, urging federal preemption for a “cohesive” approach and “significant improvements” to avoid hampering technology growth. In a May 5, 2025 open letter to the General Assembly—co-signed by Attorney General Phil Weiser, Denver Mayor Mike Johnston, U.S. Sen. Michael Bennet, and U.S. Reps. Joe Neguse and Brittany Pettersen—the officials urged a delay to January 2027, warning that a rushed framework could “stifle innovation or drive business away from our state” while allowing collaboration on a “balanced, future-ready” model protecting privacy and fairness.
These divides doomed amendments such as SB 25-318, which sought to redefine discrimination, eliminate certain duties, and add exemptions; the bill was postponed indefinitely in May 2025 on a 5-2 vote in the Senate Business, Labor, and Technology Committee amid stakeholder clashes. The August 2025 special legislative session yielded only the delay—no substantive changes—after negotiations collapsed over liability and exemptions.
The Growing Cloud of Uncertainty
The ADAI tasks the Colorado Attorney General with official rulemaking based on the statute’s provisions. As of early December 2025, however, the Attorney General’s office has yet to commence the formal rulemaking process under the State Administrative Procedure Act and has released no draft rules, no sample forms, and no substantive guidance—despite a late-2024 pre-rulemaking comment period. Companies still do not know the required format of impact assessments, exact consumer notice wording, “reasonable care” standards, documentation retention requirements, or reporting procedures. Companies across the state and country are monitoring the process closely, since rulemaking is an opportunity for stakeholders to have their voices heard before enforcement of the ADAI begins.
Many businesses, especially startups and mid-sized firms, are quietly pausing Colorado plans or shifting teams to lower-risk states. Colorado Attorney General Weiser himself stated in August 2025 that the law “is really problematic, it needs to be fixed” to avoid pushing innovation elsewhere.
The Federal Wildcard
Federal pressure is intensifying. The Trump administration’s July 2025 AI Action Plan labels state laws like Colorado’s “burdensome” and ties billions in infrastructure funding (e.g., the $42 billion BEAD program) to more permissive approaches. Most recently, on December 11, 2025, President Trump signed an executive order directing the Department of Justice to establish a litigation task force focused on challenging state-level AI regulations and laws that the administration deems restrictive of AI innovation and adoption. Colorado’s ADAI is specifically singled out in the order, signaling that federal scrutiny may soon center on Colorado’s approach.
Nationally, Colorado leads, but bills in Connecticut (passed the Senate in 2024, stalled in the House), California, New York, Illinois, Rhode Island, and Washington signal fragmentation. Absent federal action, state precedents like Colorado’s, which mirrors the EU AI Act’s risk-based model (though narrower in scope and enforcement), will proliferate, complicating multi-state operations.
What Forward-Thinking Companies Are Doing Today
The clients who feel most in control are not waiting. They are:

- conducting comprehensive AI inventories to classify high-risk systems (current and planned);
- evaluating frameworks like the NIST AI Risk Management Framework for safe-harbor alignment;
- building documentation infrastructure;
- balancing transparency and trade secrets;
- reviewing vendor contracts for indemnification and disclosures;
- fostering cross-functional coordination among legal, IT, HR, and business teams; and
- preparing comments for the Attorney General’s rulemaking—the last chance to shape practical rules.

The AG’s office maintains a mailing list for updates.
The Bottom Line
For the time being, the Colorado Artificial Intelligence Act is set to take full effect on June 30, 2026. With its broad reach, the ADAI overlaps with federal anti-discrimination rules, state and federal privacy laws, and a growing wave of other AI regulations. This makes compliance far more than a check-the-box exercise—it requires a coordinated, company-wide approach.
Right now, companies face a perfect storm: still no official rules from the Attorney General and the looming possibility of federal preemption or executive branch challenges. As a result, Colorado’s law has become one of the single biggest sources of AI-related regulatory uncertainty for U.S. businesses as we head into 2026.
The good news? You still have time to get ahead of it, but the practical window is closing faster than the calendar might suggest. Every organization’s risk profile is unique, and the safest, most efficient way to cut through the noise is to sit down with an attorney who tracks these developments day-to-day.


