
States are legislating AI, but a moratorium could stall their progress

2025-05-14 20:57:35

States are rapidly introducing bills governing the design and use of artificial intelligence (AI) technologies. In the absence of federal legislation, some state lawmakers are determined to keep pace with rising concerns over consumer harms. Since ChatGPT took over headlines after its introduction in 2022, state houses across the country have been testing the waters for legislating AI, from strengthening safety measures to managing public sector use. Nearly 700 AI-related state bills were introduced in 2024, and that number is expected to grow in 2025. However, the current draft of the budget reconciliation bill proposes a 10-year moratorium on states enacting their own AI laws, thereby restricting efforts to prevent bias and discrimination. Given the federal government’s interest in asserting authority over AI regulation, consumers, businesses, and other stakeholders could be left without clear standards for consumer protection, enforcement, and compliance throughout the development and deployment of AI models.

States are quickly advancing AI bills 

Both red and blue states are introducing draft AI legislation, and in some cases have looked across the globe for inspiration, including to the European Union’s AI Act. Some states are taking a more holistic approach to safeguarding consumers from algorithmic harms, while others are offering narrower guidelines aimed at protecting children or regulating specific use cases of AI, including in hiring.

One of the more prominent examples of comprehensive AI regulation at the state level is Colorado’s AI Act, which was signed into law in May 2024 and will take effect on Feb. 1, 2026. Reminiscent of the EU’s AI Act, Colorado’s law seeks to regulate developers and deployers of “high-risk” AI systems, which are defined as any system that “makes, or is a substantial factor in making, a consequential decision” that has “a material, legal, or similarly significant effect” in categories such as education, employment, financial services, essential government services, health care, housing, insurance, or legal services. One of the bill’s primary goals is to mitigate risks of algorithmic discrimination based on an individual’s protected traits. 

Other states, including Utah, Texas, Virginia, and California, have introduced or voted on similar comprehensive or high-risk bills, though with varying and often less aggressive approaches than the Colorado law. Utah’s Artificial Intelligence Policy Act, signed in March 2024, focuses more narrowly on disclosure agreements and consumer protection. The act was originally set to expire in May 2025, but two new bills signed this year extend its repeal date until 2027 and narrow the disclosure requirement for AI suppliers. Texas’ original approach, introduced in late 2024 by Rep. Giovanni Capriglione (R-Texas), sought to regulate high-risk systems in a manner comparable to the EU, but in March, Capriglione filed a substantially revised version that instead focuses on system development and deployment by government agencies.

Virginia’s version sought to mitigate algorithmic discrimination while imposing fewer restrictions on developers. The proposed legislation was ultimately vetoed by Gov. Glenn Youngkin out of concern that it would dampen innovation. California’s attempt, which focused more on frontier models, was also vetoed last year by Gov. Gavin Newsom, who said he believes that safety standards should be adopted but that the state should not “settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.” The reactions to these two bills highlight the challenges in passing broad AI legislation, which has led other states to instead focus on specific risks, harms, or particular use cases. Despite Newsom’s veto of that bill, he signed 18 laws related to more specific uses of AI last year.

State laws on targeted use cases 

Regulating digital replicas and other deepfakes is among the more popular measures, legislated in more than half of U.S. states, especially concerning non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM). In addition to its more comprehensive act, Utah has passed several bills specifically prohibiting sexual deepfakes, prescribing penalties for the distribution of NCII, and strengthening prohibitions on child sexual abuse material. Bipartisan support for protecting against these harms, especially for children, is reflected in Congress’ passage of the “TAKE IT DOWN Act,” which makes it illegal to knowingly publish NCII and requires social media platforms to remove such content within 48 hours of a victim’s request. Still, the law neither grants victims the private right of action that some state laws provide nor includes a preemption clause. This is an example of how compliance challenges can arise in the absence of a broader federal framework.

Similar requirements have emerged around election-related or political deepfakes. At least 16 states passed bills last year aiming to limit AI’s effect on elections, and despite uncertainty over the true effect of such content in the 2024 general election, even more are now considering similar proposals. These efforts reflect a broader trend toward regulating generative AI and requiring more transparency as the technology becomes widely accessible—an area where states have seen varying degrees of success. Given the free speech implications of this type of legislation, two states—California and Minnesota—have faced legal challenges to their laws, with a federal judge blocking California’s bill last year.  

Some cities and municipalities have also established use-specific laws. In 2021, New York City passed a law that requires audits of AI screening tools. Employers must also provide notice that such tools are being used and allow job applicants the chance to request an alternative process.  

Considerations for further legislation 

The flurry of activity by state lawmakers demonstrates states’ interest in establishing clear frameworks for their constituents, even if this results in a patchwork of uneven mandates. When bills are vetoed or fail to pass, the actions of state leaders still offer insight into the opportunities and limits of state-led AI legislation. Looking ahead, if states continue advancing their own laws, the eventual adoption of national standards could create uncertainty for large tech companies operating across jurisdictions, as well as for developers, distributors, consumers, and state attorneys general (AGs).

Industry leaders have long cited state authority as a challenge for compliance, and varying enforcement approaches across states complicate efforts to push for a one-size-fits-all federal AI policy. But with federal agencies facing threats to their independence, declining interest in consumer protection from the current administration, and limited enforcement resources, states should continue leading to ensure the ethical and safe deployment of AI and to define the role of their AGs in this evolving space.

The active role of state AGs 

State attorneys general will play an increasingly important role in AI oversight, with or without new legislation, especially as opaque technologies complicate product liability. Given the Trump administration’s lack of interest in civil rights and consumer protections, AGs will need to interpret how existing consumer protection and civil rights laws apply to AI. In the 19 states with data privacy mandates, AGs will also have to factor those laws into their decisions. Last year, Texas AG Ken Paxton established a data privacy and security team focused on “aggressive enforcement” of the state’s privacy laws, and soon after reached a settlement with a generative AI health care company accused of making deceptive claims—securing assurances that emphasized clear consumer disclosure.

In January, California’s AG issued legal advisories offering examples of how existing California laws may be used to enforce AI regulations and protect consumers in certain contexts. Since no comprehensive AI legislation has been enacted in California, the AG sought to clarify the application of two existing laws—the California Consumer Privacy Act and the California Invasion of Privacy Act—to AI. That same month, New Jersey’s AG issued similar guidance on the application of the state’s Law Against Discrimination to automated decision-making tools, stating that it prohibits “all forms of discrimination,” whether conducted by humans or artificial systems. Regardless of what states are doing in these areas, the direction of certain federal agencies charged with enforcing existing civil rights laws—such as the Equal Employment Opportunity Commission—will help determine how harms are identified and remedies are implemented, but only if those agencies prioritize such issues.

The impact of a 10-year national moratorium  

The provision for a 10-year national moratorium is included in the 2025 budget reconciliation bill and was introduced by the House Energy and Commerce Committee under Chairman Brett Guthrie (R-Ky.). The language prevents states from creating legal barriers or laws that restrict AI design, performance, civil liability, and documentation, and defines AI as “any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output, including a score, classification, or recommendation, to materially influence or replace human decision making.” While some industry associations have praised the move toward federal legislation over a patchwork of state laws, the National Conference of State Legislators (NCSL) issued a statement opposing the provision, arguing it stifles state authority and innovation. Congress has generally deferred to the states on a range of issues, respecting federalism and the Constitution’s principle that powers not delegated to the federal government are reserved to the states.

If the national provision passes, it will override states’ legal authority and make it harder to tailor legislative solutions to local concerns. Congress should lead on setting national standards and related laws, but in the meantime, a blanket preemption would halt oversight of privacy and civil rights—even though states are typically able to provide additional protections to address their constituents’ needs.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).


