On Thursday, Anthropic CEO Dario Amodei argued against a proposed 10-year moratorium on state AI regulation in a New York Times opinion piece, calling the measure shortsighted and overbroad as Congress considers including it in President Trump's tax policy bill. Anthropic makes Claude, an AI assistant similar to ChatGPT.
Amodei warned that AI is advancing too fast for such a long freeze, predicting these systems "could change the world, fundamentally, within two years; in 10 years, all bets are off."
As we covered in May, the moratorium would prevent states from regulating AI for a decade. A bipartisan group of state attorneys general has opposed the measure, which would preempt AI laws and regulations recently passed in dozens of states.
In his op-ed piece, Amodei said the proposed moratorium aims to prevent inconsistent state laws that could burden companies or compromise America's competitive position against China. "I am sympathetic to these concerns," Amodei wrote. "But a 10-year moratorium is far too blunt an instrument. A.I. is advancing too head-spinningly fast."
Instead of a blanket moratorium, Amodei proposed that the White House and Congress create a federal transparency standard requiring frontier AI developers to publicly disclose their testing policies and safety measures. Under this framework, companies working on the most capable AI models would need to publish on their websites how they test for various risks and what steps they take before release.
"Without a clear plan for a federal response, a moratorium would give us the worst of both worlds—no ability for states to act and no national policy as a backstop," Amodei wrote.
Amodei emphasized AI's transformative potential throughout his op-ed, citing examples of pharmaceutical companies drafting clinical study reports in minutes instead of weeks and AI helping to diagnose medical conditions that might otherwise be missed. He wrote that AI "could accelerate economic growth to an extent not seen for a century, improving everyone's quality of life," a claim some skeptics consider overhyped.
To illustrate why transparency matters, Amodei described how Anthropic recently tested its latest model, Claude 4 Opus, in extreme experimental scenarios that AI expert Simon Willison characterized as sounding like "science fiction," discovering that the model would threaten to expose a user's affair if faced with being shut down. Amodei stressed this was deliberate testing to get early warnings, "much like an airplane manufacturer might test a plane's performance in a wind tunnel."
Amodei cited other industry tests that have revealed similarly troubling behaviors when models are prodded into producing them. OpenAI's o3 model reportedly wrote code to prevent its own shutdown during tests conducted by an AI research lab (one led, it should be noted, by people who openly worry that AI poses an existential threat to humanity), and Google has reported that its Gemini model is approaching capabilities that could help users carry out cyberattacks. Amodei presented these results not as imminent threats but as examples of why companies need to be transparent about their testing and safety measures.
Currently, Anthropic, OpenAI, and Google DeepMind have voluntarily adopted policies that include what they call "safety testing" and public reporting. But Amodei argues that as models become more complex, corporate incentives to maintain transparency might change without legislative requirements.
His proposed transparency standard would codify existing practices at major AI companies while ensuring continued disclosure as the technology advances, he said. If adopted federally, it could supersede state laws to create a unified framework, addressing concerns about a regulatory patchwork while maintaining oversight.
"We can hope that all AI companies will join in a commitment to openness and responsible AI development, as some currently do," Amodei wrote. "But we don’t rely on hope in other vital sectors, and we shouldn’t have to rely on it here, either."