Colorado Enacts First-Ever Consumer Protection Law Regulating AI
Colorado has become a trailblazer in AI legislation with the passage of Senate Bill 24-205, known as the “Colorado AI Act,” which takes effect February 1, 2026. The legislation targets unlawful discrimination in crucial consumer areas such as lending and employment by setting forth detailed obligations for companies that use AI in decision-making processes.
The Colorado AI Act applies specifically to developers and deployers of "high-risk artificial intelligence systems," defined as AI that plays a substantial role in making consequential decisions. These decisions include matters related to education, employment, financial services, government services, healthcare, housing, insurance, and legal services. It targets AI systems used in decision-making contexts, rather than those involved in generative AI applications.
The legislation was designed to mitigate concerns regarding "algorithmic discrimination," which refers to discriminatory treatment or impact based on protected characteristics such as age, color, disability, ethnicity, genetic information, language proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other classifications protected under state or federal law.
Under the Colorado AI Act, developers of high-risk AI systems must implement a risk management program aimed at preventing algorithmic discrimination. They are required to exercise reasonable care in safeguarding consumers from known or foreseeable risks of bias. Developers must also furnish detailed documentation to deployers, including a statement outlining potential uses, known risks, training data, limitations, purposes, intended benefits, and monitoring procedures for bias in algorithmic decisions.
Deployers of these AI systems must comply with an affirmative duty of reasonable care towards consumers, ensuring protection from known or foreseeable risks of algorithmic discrimination. This duty includes providing consumer notifications, ongoing monitoring programs, and regular system analysis reports. Consumers must be informed when AI systems are used in decision-making processes, with disclosures about system purposes, decision nature, and the right to opt-out.
Additionally, deployers must disclose reasons for adverse decisions facilitated by AI, including data sources, and offer avenues for appeal and correction of inaccurate data. The legislation mandates annual data impact assessments for each AI decision-making system deployed in Colorado.
Notably, the Colorado AI Act does not include a private right of action, with enforcement left to the state attorney general as an unfair trade practice. Companies involved in developing, selling, licensing, or deploying decision-making AI systems will need to comply with these new requirements, notifications, and explanations by the statutory deadline.