By Matt Sledge
Not long ago, artificial intelligence executives were asking Congress for more regulation. The House of Representatives budget bill passed last week demonstrates how quickly the industry has changed course.
Inside the House bill is a moratorium on the sort of state-level AI regulations that have addressed political “deepfakes” and using AI to deny medical claims. At the same time the House bill cuts Medicare, it would funnel more money to tech companies to develop kamikaze drones.
For proponents of AI regulation, the House bill is the culmination of a shift in the industry’s mindset. Rather than paying lip service to popular concerns about AI, the industry has decided to partner with the Trump administration on its goal of “global AI dominance.”
“The message is clear. The House Republican proposal is stealing from poor people to give huge handouts to Big Tech to build technology that is going to perpetuate the president’s authoritarian plans and crackdowns against vulnerable people,” said Kevin De Liban, the founder of TechTonic Justice, a nonprofit aimed at preventing tech from harming low-income people.
The splashiest measure in the House bill may also be one of the least likely provisions to make it into law. States would be banned from drafting their own AI regulations for the next 10 years.
At a recent House hearing, Rep. Jay Obernolte, R-Calif., said the provision was motivated in part by frustration over Congress’s failure to pass nationwide regulation. The growing patchwork of state regulations, he said, presents a challenge for small startups.
“The people who can’t deal with that are two innovators in a garage trying to start the next OpenAI, the next Google. Those are the people we’re trying to protect,” he said. “I would love this to be months, not years. But I think it’s important to send the message that everyone needs to be motivated to come to the table here.”
The patchwork is hardly as daunting as Obernolte claims, argued Amba Kak, co-executive director of the AI Now Institute, a think tank that opposes commercial surveillance. The most sweeping state legislation that has actually passed — in California and Colorado — mostly addresses transparency about when AI is being used, she said. Laws in other states are designed to go after the worst-of-the-worst actors in the developing field, targeting political “deepfakes,” AI “revenge porn,” and the use of AI by health insurance companies.
“This is not a sprawling morass of industry-burdening regulation. It truly is the bare minimum,” Kak said.
The regulation moratorium passed by the House has sparked pushback from a bipartisan group of 40 state attorneys general, who called it “irresponsible,” and from consumer protection groups including Consumer Reports.
The biggest companies in the AI industry have largely sidestepped commenting directly on the moratorium, but OpenAI CEO Sam Altman gave his broad thoughts on regulation at a Senate hearing earlier this month.
In 2023, Altman made waves by calling for regulation of the industry. This month, however, he gave Senate testimony saying that it would be “disastrous” if the U.S. adopted regulations along the lines of Europe’s, which are set to require registration for “high-risk” AI systems and impose other requirements on their developers. More broadly, he called for limited regulation of the industry.
“I think some policy is good. I think it is easy for it to go too far and as I’ve learned more about how the world works, I’m more afraid that it could go too far and have really bad consequences,” he said.
Observers are skeptical that the House moratorium will make it through the Senate. Republicans are using a process known as reconciliation to push the bill through the Senate with a simple majority of 50 votes, instead of the 60 needed to overcome a filibuster. A reconciliation guideline known as the Byrd rule, however, requires every provision of a reconciliation bill to center on the budget.
“I feel pretty damn confident that the moratorium is going to fall out,” said Bobby Kogan, senior director of federal budget policy at the Center for American Progress, who previously worked for the Biden White House and as a Democratic staffer on the Senate Budget Committee. “Telling a state what laws it is going to make has nothing to do with getting federal dollars in or out the door.”
Sen. Josh Hawley, R-Mo., has promised to fight against the moratorium, and Sen. John Cornyn, R-Texas, a budget committee member, has expressed his skepticism that the measure abides by Senate rules.
Whatever becomes of the House provision, AI watchdogs say it could herald the beginning of an era where state-level regulation faces more hurdles.
“We are probably going to see the ramping up of industry lobbying,” said Kak. “If this is where we are headed, I think it really ups the stakes for any state legislative effort.”
House Republicans appear to be on firmer footing with a provision to spend $25 million on AI contracts to detect and recoup Medicare fraud.
That provision worries AI skeptics, who point to the history of such programs thus far. In Arkansas, an algorithm that was used to determine Medicaid eligibility cut off people from in-home health assistance.
“It resulted in horrific cuts to my clients’ care,” said De Liban, who worked as a Legal Aid lawyer in Arkansas at the time. “People who formerly had been getting eight hours of care, which isn’t enough but is the state’s maximum, were cut to four or five hours of care. … They were lying in waste for hours on end.”
In Texas, hundreds of Medicaid patients were also allegedly booted out of the program or faced hurdles because of errors in a state algorithm. Given the track record of artificial intelligence and algorithms so far, doctors and patients should be worried about the House fraud provision, De Liban said.
“We are going to use faulty, unreliable technology to probably keep doctors from being able to provide care to patients,” he said.
The money for fraud prevention pales in comparison to other spending on AI. The bill includes billions of dollars for the Pentagon and border security — a promising market for a wave of Silicon Valley-funded defense startups.
This proposed wave of spending on AI defense and border projects lines up with a shift in attitudes toward guardrails on the technology, Kak said.
“If anything, this regulatory sentiment that we are seeing in the moratorium ports over even more aggressively to the national security space, where we are hearing that this is a moment to roll back on protections,” she said.
The provisions in the House bill include $500 million for “attritable autonomous military capabilities” — think low-cost drones such as the ones Ukraine has used to defend against Russia; $450 million for AI and autonomous capabilities in naval shipbuilding; $298 million for AI and autonomous projects at the Pentagon’s Test Resource Management Center; $250 million for artificial intelligence at Cyber Command; $188 million for “maritime robotic autonomous systems”; $120 million for aerial surveillance drones; $111 million for kamikaze attack drones; and $20 million for using artificial intelligence in the Defense Department’s auditing process.
The alphabet soup of agencies involved in Trump’s border security and deportation campaign also benefits from the bill. U.S. Customs and Border Protection would receive $2.7 billion for border surveillance technologies and $1 billion for anti-narcotics projects, including artificial intelligence. The Coast Guard would also receive $75 million for “autonomous maritime systems.”
Separately, the Commerce Department would receive $500 million for federal government IT modernization.
Though Kak is skeptical that the AI defense contractors’ products are as transformational as they promise, these technologies have won favor in the White House. Trump, who once called AI “dangerous,” has tapped industry boosters such as JD Vance and David Sacks for key roles in his administration. One of his first executive orders called on agencies to remove barriers to AI adoption imposed by the Biden administration.
“It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security,” the order states.