MANAGEMENT UPDATE.
HOW ARE STATES REGULATING AI?
In late January, the National Conference of State Legislatures published a highly useful summary, written by Chelsea Canada, covering three legislative trends that have emerged in the ever-more-important realm of artificial intelligence, based on its review of over 450 bills. The three trends that rose to the top of its list were consumer protection and transparency; deepfakes (which use AI to create false audio, images or video); and government use of AI.

The following are some of the most significant actions in these three areas:
Consumer Protection and Transparency
“Lawmakers considered over 100 bills in the two categories of private sector and responsible use,” according to the piece, with three states passing “the first U.S. laws focused on safety and protections for consumers when using AI products.” The three states were California, Colorado and Utah.
For example, Colorado passed the nation’s first comprehensive AI law, which according to the NCSL, “says AI developers and deployers must avoid algorithmic discrimination, defined as any use of AI that results in unlawful differential treatment or that disfavors a group of individuals protected under current state and federal laws based on age, disability, ethnicity or other protected class.”
California’s law “focuses on training data transparency. Starting in 2026, developers of generative AI systems must publicly share what data was used to train the system or service.” Meanwhile, Utah’s new law establishes liability “for use of artificial intelligence (AI) that violates consumer protection laws if not properly disclosed,” according to a state description of the law as enacted.
Deepfakes
“At least half of states passed over 40 new laws for deepfake technology,” reported the NCSL. “At least 19 states passed legislation related to sexually explicit deepfakes. Of those states, 12 focused generally on fabricated sexually explicit images, and others focused on these types of materials that depict minors. Florida, for example, has made it a crime to create computer-generated child pornography.”
More than 10 states put safeguards in place regarding the use of deepfakes in elections. Arizona, for example, passed a law that “allows for any candidate to sue if a ‘digital impersonation’ of the person is published,” and another that “requires disclosure of the use of a deepfake of a political candidate within 90 days of an election.”
Government Use
With many state agencies rolling out a wide variety of plans to use AI in their operations, at least 10 states have taken legislative steps to require an inventory of these uses and the services they deliver. These states include Connecticut, Delaware, Maryland, Vermont and West Virginia.
Meanwhile, “To address concerns about possible bias, discrimination and disparate impact, states such as Connecticut, Maryland, Vermont, Virginia and Washington mandated that state agencies perform impact assessments to ensure that the AI systems in use are ethical, trustworthy and beneficial.”
For more information on AI legislation and additional resources on AI, visit NCSL’s AI Policy Toolkit.
#StateArtificialIntelligenceLegislation #StateManagementArtificialIntelligence #StateGovernmentPolicyandManagement #StateTechnologyManagement #StateGovernmentGenerativeAIPolicyandManagement #ArtificialIntelligenceAndStateConsumerProtection #ArtificialIntelligenceAndStateTransparency #ArtificialIntelligenceandDeepFakes #2024StateArtificialIntelligenceLegislation #StateArtificialIntelligenceGovernance #StateGovernmentUseOfArtificialIntelligence #StateArtificialIntelligenceBiasLegislation #StateArtificialIntelligenceTransparencyLegislation #BandGWeeklyManagementSelection #StateandLocalManagementNews #StateandLocalArtificialIntelligenceNews #BarrettandGreeneInc