
The One Big Beautiful Bill Act would ban states from regulating AI

2025-05-25 09:00:00

States couldn't enact consumer AI protections for 10 years if the bill passes.

By Rebecca Ruiz
Republican members of Congress hold a press conference about the One Big Beautiful Bill Act.

Congressional Republicans have included a moratorium on state AI regulations in their budget bill. Credit: Anna Moneymaker / Staff / Getty Images News

Buried in the Republican budget bill is a proposal that would radically change how artificial intelligence develops in the U.S., according to both its supporters and critics. The provision would ban states from regulating AI for the next decade.

Opponents say the moratorium is so broadly written that states wouldn't be able to enact protections for consumers affected by harmful applications of AI, like discriminatory employment tools, deepfakes, and addictive chatbots.

Instead, consumers would have to wait for Congress to pass its own federal legislation to address those concerns; Congress currently has no draft of such a bill. If Congress fails to act, consumers will have little recourse until the end of the decade-long ban, unless they decide to sue companies responsible for alleged harms.

Proponents of the proposal, including the Chamber of Commerce, say it will ensure America's global dominance in AI by freeing small and large companies from what they describe as a burdensome patchwork of state-by-state regulations.

But many say the provision's scope, scale, and timeline are without precedent, and that it amounts to a big gift to tech companies, including ones that donated to President Donald Trump.

This week, a coalition of 77 advocacy organizations, including Common Sense Media, Fairplay, and the Center for Humane Technology, called on congressional leadership to jettison the provision from the GOP-led budget.

"By wiping out all existing and future state AI laws without putting new federal protections in place, AI companies would get exactly what they want: no rules, no accountability, and total control," the coalition wrote in an open letter.


Some states already have AI-related laws on the books. In Tennessee, for example, a state law known as the ELVIS Act was written to prevent the impersonation of a musician's voice using AI. Republican Sen. Marsha Blackburn, who represents Tennessee in Congress, recently hailed the act's protections and said a moratorium on regulation can't come before a federal bill.

Other states have drafted legislation to address specific emerging concerns, particularly related to youth safety. California has two bills that would place guardrails on AI companion platforms, which advocates say are currently not safe for teens.

One of the bills would specifically outlaw high-risk uses of AI, including "anthropomorphic chatbots that offer companionship" to children and are likely to lead to emotional attachment or manipulation.

Camille Carlton, policy director at the Center for Humane Technology, says that while remaining competitive amidst greater regulation may be a valid concern for smaller AI companies, states are not proposing or passing expansive restrictions that would fundamentally hinder them. Nor are they targeting companies' ability to innovate in areas that would make America truly world-leading, like in health care, security, and the sciences. Instead, they are focused on key areas of safety, like fraud and privacy. They're also tailoring bills to cover larger companies or offering tiered responsibilities appropriate to a company's size.

Historically, tech companies have lobbied against certain state regulations, arguing that federal legislation would be preferable, Carlton says. But they then lobby Congress to water down or kill its own regulatory bills, too, she notes.

Arguably, that's why Congress hasn't passed any major, comprehensive consumer protections related to digital technology in the decades since the internet became ascendant, Carlton says. She adds that consumers may see the same pattern play out with AI, too.

Some experts are particularly worried that a hands-off approach to regulating AI will only repeat what happened when social media companies first operated without much interference. They say that came at the cost of youth mental health.

Gaia Bernstein, a tech policy expert and professor at the Seton Hall University School of Law, says that states have increasingly been at the forefront of regulating social media and tech companies, particularly with regard to data privacy and youth safety. Now they're doing the same for AI.

Bernstein says that in order to protect kids from excessive screen time and other online harms, states also need to regulate AI, because of how frequently the technology is used in algorithms. Presumably, the moratorium would prohibit states from doing so.

"Most protections are coming from the states. Congress has largely been unable to do anything," Bernstein says. "If you're saying that states cannot do anything, then it's very alarming, because where are any protections going to come from?"

Rebecca Ruiz

Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca's experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master's degree from U.C. Berkeley's Graduate School of Journalism.



Summary

Congressional Republicans have included a provision in their budget bill that would ban states from regulating artificial intelligence (AI) for the next decade. Proponents argue it will ensure America's global dominance in AI by reducing regulatory burdens on companies, while opponents fear it could leave consumers without protections against harmful AI applications until Congress passes federal legislation, of which there is currently no draft. States such as Tennessee already have AI-related laws on the books, and others, like California, have drafted bills focused on safety concerns such as youth protection, fraud, and privacy, but the proposed moratorium would prevent further state regulation. Critics argue the move benefits tech companies at the expense of public safety and accountability.