Congress might block state AI laws for a decade. Here’s what it means. | TechCrunch
A federal proposal that would ban states and local governments from regulating AI for 10 years could soon be signed into law, as Sen. Ted Cruz (R-TX) and other lawmakers work to secure its inclusion into a GOP megabill ahead of a key July 4 deadline.
Those in favor – including OpenAI’s Sam Altman, Anduril’s Palmer Luckey, and a16z’s Marc Andreessen – argue that a “patchwork” of AI regulation among states would stifle American innovation at a time when the race to beat China is heating up.
Critics include most Democrats, several Republicans, Anthropic’s CEO Dario Amodei, labor groups, AI safety nonprofits, and consumer rights advocates. They warn that this provision would block states from passing laws that protect consumers from AI harms and would effectively allow powerful AI firms to operate without much oversight or accountability.
The so-called “AI moratorium” was squeezed into the budget reconciliation bill, nicknamed the “Big Beautiful Bill,” in May. It is designed to prohibit states from “[enforcing] any law or regulation regulating [AI] models, [AI] systems, or automated decision systems” for a decade.
Such a measure could preempt state AI laws that have already passed, such as California’s AB 2013, which requires companies to reveal the data used to train AI systems, and Tennessee’s ELVIS Act, which protects musicians and creators from AI-generated impersonations.
The moratorium’s reach extends far beyond these examples. Public Citizen has compiled a database of AI-related laws that could be affected by the moratorium. The database reveals that many states have passed laws that overlap with one another, which could actually make the “patchwork” easier for AI companies to navigate than critics of state regulation suggest. For example, Alabama, Arizona, California, Delaware, Hawaii, Indiana, Montana, and Texas have criminalized or created civil liability for distributing deceptive AI-generated media meant to influence elections.
The AI moratorium also threatens several noteworthy AI safety bills awaiting signature, including New York’s RAISE Act, which would require large AI labs nationwide to publish thorough safety reports.
Getting the moratorium into a budget bill has required some creative maneuvering. Because provisions in a budget bill must have a direct fiscal impact, Cruz revised the proposal in June to make compliance with the AI moratorium a condition for states to receive funds from the $42 billion Broadband Equity, Access, and Deployment (BEAD) program.
Cruz then released another revision on Wednesday, which he says ties the requirement only to the new $500 million in BEAD funding included in the bill – a separate, additional pot of money. However, close examination of the revised text finds the language also threatens to pull already-obligated broadband funding from states that don’t comply.
Sen. Maria Cantwell (D-WA) criticized Cruz’s reconciliation language on Thursday, claiming the provision “forces states receiving BEAD funding to choose between expanding broadband or protecting consumers from AI harms for ten years.”
What’s next?
Currently, the provision is at a standstill. Cruz’s initial revision passed the procedural review earlier this week, which meant that the AI moratorium would be included in the final bill. However, reporting today from Punchbowl News and Bloomberg suggests that talks have reopened and that conversations on the AI moratorium’s language are ongoing.
Sources familiar with the matter tell TechCrunch they expect the Senate to begin heavy debate this week on amendments to the budget, including one that would strike the AI moratorium. That will be followed by a vote-a-rama – a series of rapid votes on the full slate of amendments.
Chris Lehane, chief global affairs officer at OpenAI, said in a LinkedIn post that the “current patchwork approach to regulating AI isn’t working and will continue to worsen if we stay on this path.” He said this would have “serious implications” for the U.S. as it races to establish AI dominance over China.
“While not someone I’d typically quote, Vladimir Putin has said that whoever prevails will determine the direction of the world going forward,” Lehane wrote.
OpenAI CEO Sam Altman shared similar sentiments this week during a live recording of the tech podcast Hard Fork. He said while he believes some adaptive regulation that addresses the biggest existential risks of AI would be good, “a patchwork across the states would probably be a real mess and very difficult to offer services under.”
Altman also questioned whether policymakers were equipped to handle regulating AI when the technology moves so quickly.
“I worry that if…we kick off a three-year process to write something that’s very detailed and covers a lot of cases, the technology will just move very quickly,” he said.
But a closer look at existing state laws tells a different story. Most state AI laws that exist today aren’t far-reaching; they focus on protecting consumers and individuals from specific harms, like deepfakes, fraud, discrimination, and privacy violations. They target the use of AI in contexts like hiring, housing, credit, healthcare, and elections, and include disclosure requirements and algorithmic bias safeguards.
TechCrunch has asked Lehane and other members of OpenAI’s team whether they could name any current state laws that have hindered the tech giant’s ability to advance its technology and release new models. We also asked why navigating different state laws would be considered too complex, given OpenAI’s progress on technologies that may automate a wide range of white-collar jobs in the coming years.
TechCrunch asked similar questions of Meta, Google, Amazon, and Apple, but has not received any answers.
The case against preemption

“The patchwork argument is something that we have heard since the beginning of consumer advocacy time,” Emily Peterson-Cassin, corporate power director at internet activist group Demand Progress, told TechCrunch. “But the fact is that companies comply with different state regulations all the time. The most powerful companies in the world? Yes. Yes, you can.”
Opponents and cynics alike say the AI moratorium isn’t about innovation – it’s about sidestepping oversight. While many states have passed regulation around AI, Congress, which moves notoriously slowly, has passed zero laws regulating AI.
“If the federal government wants to pass strong AI safety legislation, and then preempt the states’ ability to do that, I’d be the first to be very excited about that,” said Nathan Calvin, VP of state affairs at the nonprofit Encode – which has sponsored several state AI safety bills – in an interview. “This takes away all leverage, and any ability, to force AI companies to come to the negotiating table.”
One of the loudest critics of the proposal is Anthropic CEO Dario Amodei. In an opinion piece for The New York Times, Amodei said “a 10-year moratorium is far too blunt an instrument.”
“AI is advancing too head-spinningly fast,” he wrote. “I believe that these systems could change the world, fundamentally, within two years; in 10 years, all bets are off. Without a clear plan for a federal response, a moratorium would give us the worst of both worlds — no ability for states to act, and no national policy as a backstop.”
He argued that instead of prescribing how companies should release their products, the government should work with AI companies to create a transparency standard for how companies share information about their practices and model capabilities.
The opposition isn’t limited to Democrats. There’s been notable opposition to the AI moratorium from Republicans who argue the provision stomps on the GOP’s traditional support for states’ rights, even though it was crafted by prominent Republicans like Cruz and Rep. Jay Obernolte.
These Republican critics include Sen. Josh Hawley (R-MO), who is concerned about states’ rights and is working with Democrats to strip the provision from the bill. Sen. Marsha Blackburn (R-TN) has also criticized the provision, arguing that states need to protect their citizens and creative industries from AI harms. Rep. Marjorie Taylor Greene (R-GA) even went so far as to say she would oppose the entire budget if the moratorium remains.
What do Americans want?
Republicans like Cruz and Senate Majority Leader John Thune say they want a “light touch” approach to AI governance. Cruz also said in a statement that “every American deserves a voice in shaping” the future.
However, a recent Pew Research survey found that most Americans seem to want more regulation around AI. The survey found that about 60% of U.S. adults and 56% of AI experts say they’re more concerned that the U.S. government won’t go far enough in regulating AI than they are that the government will go too far. Americans also largely aren’t confident that the government will regulate AI effectively, and they are skeptical of industry efforts around responsible AI.