Colorado Establishes Consumer Protections for AI!

On May 17, 2024, the Colorado General Assembly's SB 24-205, titled “Consumer Protections for Artificial Intelligence” (https://leg.colorado.gov/bills/sb24-205), was signed into law. The act “requires a developer of a high-risk artificial intelligence system (high-risk system) to use reasonable care to avoid algorithmic discrimination in the high-risk system” and includes these statements:

There is a rebuttable presumption that a developer used reasonable care if the developer complied with specified provisions in the bill, including:

• Making available to a deployer of the high-risk system a statement disclosing specified information about the high-risk system;

• Making available to a deployer of the high-risk system information and documentation necessary to complete an impact assessment of the high-risk system;

• Making a publicly available statement summarizing the types of high-risk systems that the developer has developed or intentionally and substantially modified and currently makes available to a deployer and how the developer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from the development or intentional and substantial modification of each of these high-risk systems; and

• Disclosing to the attorney general and known deployers of the high-risk system any known or reasonably foreseeable risk of algorithmic discrimination, within 90 days after the discovery or receipt of a credible report from the deployer, that the high-risk system has caused or is reasonably likely to have caused.

The bill also requires a deployer of a high-risk system to use reasonable care to avoid algorithmic discrimination in the high-risk system. There is a rebuttable presumption that a deployer used reasonable care if the deployer complied with specified provisions in the bill, including:

• Implementing a risk management policy and program for the high-risk system;

• Completing an impact assessment of the high-risk system;

• Annually reviewing the deployment of each high-risk system deployed by the deployer to ensure that the high-risk system is not causing algorithmic discrimination;

• Notifying a consumer of specified items if the high-risk system makes a consequential decision concerning a consumer;

• Providing a consumer with an opportunity to correct any incorrect personal data that a high-risk artificial intelligence system processed in making a consequential decision;

• Providing a consumer with an opportunity to appeal, via human review if technically feasible, an adverse consequential decision concerning the consumer arising from the deployment of a high-risk artificial intelligence system;

• Making a publicly available statement summarizing the types of high-risk systems that the deployer currently deploys, how the deployer manages any known or reasonably foreseeable risks of algorithmic discrimination that may arise from deployment of each of these high-risk systems, and the nature, source, and extent of the information collected and used by the deployer; and

• Disclosing to the attorney general the discovery of algorithmic discrimination, within 90 days after the discovery, that the high-risk system has caused or is reasonably likely to have caused.

This is good news for Colorado, and it offers guidance for other states.
