Why Colorado’s new AI law has the right focus, but the wrong timing
Last month Colorado became the first state to pass a law regulating the development and use of AI systems, the full text of which can be found here. Even though the law may ultimately be amended or preempted before it goes into effect on February 1, 2026, the Colorado AI law provides a great opportunity to take stock of how regulators are thinking about AI legislation, particularly when compared to more aggressive attempts like California’s proposed Senate Bill 1047.
The Colorado AI Law
The main focus of the Colorado AI law is to prevent discrimination by “high-risk AI systems,” defined as AI systems that make, or are related to making, “consequential decisions.” A consequential decision is defined as a “decision that has a material legal or similarly significant effect on the provision or denial” of education enrollment or opportunity, employment, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services.
The law is intended to be narrow in scope. Along with targeting only “high-risk AI systems,” it also includes various exemptions, including for smaller businesses (companies with fewer than 50 employees that use AI in a limited way), some insurers, and researchers working for or with the approval of the federal government.
For people or businesses that are covered, the law creates responsibilities for two categories of actors: developers and deployers. Developers are defined as a “person doing business in [Colorado] that develops or intentionally and substantially modifies an artificial intelligence system,” while deployers are defined as a “person doing business in [Colorado] that deploys a high-risk artificial intelligence system.” The law imposes familiar requirements on both groups, mainly pertaining to disclosure and internal system maintenance, and creates a rebuttable presumption that a developer or deployer acted with reasonable care if it follows the law’s guidelines.
In addition to these general requirements, developers must provide documentation to deployers disclosing things like foreseeable harmful uses, the type of training data used, and other data governance measures. The law also requires developers to make certain public disclosures about their high-risk AI systems.
Deployers are required to have internal systems in place so that they can monitor and prevent any instances of discrimination, and report them should they occur. Additionally, deployers must disclose to their users which high-risk AI systems are being used and for what purposes. Deployers must also give consumers the ability to appeal, or seek redress for, any adverse decisions made by the system.
Colorado v. California
We can appreciate what’s notable about the Colorado AI law by comparing it to another state AI bill making headlines these days: California Senate Bill 1047. SB 1047 is worth a deeper dive (we’re planning one, but in the meantime this is a great place to start), but for our purposes here, the main thing to note is that SB 1047 focuses not on categories of actors (as Colorado does) but on the size of the model being developed. While the exact thresholds are still being worked out, the stated aim of the bill is to regulate the largest, most expensive models in the industry.
One of the biggest concerns that people have about regulating AI at the model level is that researchers and developers could face liability when products that are powered by their models, but developed and maintained by other third parties, cause harm. If researchers working on models are held liable for negative consequences that arise from downstream developers leveraging those models, there is a legitimate concern that researchers may decide to limit or curtail access to those models, chilling innovation and development.
Much of the friction here comes from the fact that those developing models often stand in a completely different relationship to end users than those using the models to power consumer-facing products. Why are developers facing potential liability, ask opponents of SB 1047, when it is the decisions of companies downstream of them (decisions often entirely unforeseeable to the original researchers) that are causing the harms? The power company isn’t held liable when a hacker steals someone’s identity simply because it was the power company’s electricity keeping the hacker’s laptop online. In the same way, argue opponents of bills like California’s SB 1047, researchers and developers shouldn’t face liability for what are ultimately the decisions of other third parties.
Colorado’s AI law offers a potentially helpful framework for resolving this difficulty by focusing on the type of actor actually using the AI system, developer or deployer, rather than simply the size of the model in question. This creates a more tailored regulatory approach that better fits the ways developers and deployers actually interact with each other, with AI models, and with end users. Rather than having to think about endless downstream users, developers of models can focus on how best to serve deployers, who in turn are in the best position to monitor how the systems affect end consumers. This sort of system allows regulators to avoid being entirely hands-off while still cabining responsibility to the harms that researchers and developers can reasonably control.
What should AI bills look like going forward?
So state legislation should look like Colorado’s AI law going forward, right? Not exactly. While there are certainly things to like about Colorado’s AI law, we think it’s important for state and federal regulators to first ask whether AI needs its own specific regulations at all. To the extent state legislators want to think about regulating the AI industry, it might make sense to take a more cautious approach focused on first studying AI (like this bill that made its way through Connecticut’s legislature before ultimately failing this past year).
While policymakers are right to worry about the problems that could come out of AI (just as with any new, groundbreaking technology), we’re not sure that new technologies automatically need new laws. For example, while it’s probably right to think about the ways AI-powered systems could help people commit fraud, defrauding people is already illegal. Until we know, for example, that AI-powered systems are creating more fraud, or fraud that is harder to detect or prosecute, it might make sense to exercise some regulatory restraint rather than guess at exactly what must be proscribed.
If the rise of AI just means more of the same vanilla fraud we’ve always seen, maybe the more effective fraud-prevention measure is more enforcement resources for governments, as opposed to simply more AI-specific laws. And who knows: AI optimists would tell you that the rise of powerful AI systems might mean less fraud (in which case a whole new set of AI fraud laws would have even less of an impact than we might have thought beforehand). Until then, governments risk putting the regulatory cart before the horse.