Anthropic's AI models could help spies analyze classified documents, but the company draws the line at domestic surveillance. That restriction is reportedly angering the Trump administration.
On Tuesday, Semafor reported that Anthropic faces growing hostility from the Trump administration over the AI company's restrictions on law enforcement uses of its Claude models. Two senior White House officials told the outlet that federal contractors working with agencies like the FBI and Secret Service have run into roadblocks when attempting to use Claude for surveillance tasks.
The friction stems from Anthropic's usage policies that prohibit domestic surveillance applications. The officials, who spoke to Semafor anonymously, said they worry that Anthropic enforces its policies selectively based on politics and uses vague terminology that allows for a broad interpretation of its rules.
The restrictions affect private contractors who work with law enforcement agencies and rely on AI models for that work. In some cases, Anthropic's Claude models are the only AI systems cleared for top-secret security situations through Amazon Web Services' GovCloud, according to the officials.
Anthropic offers a specific service for national security customers and made a deal with the federal government to provide its services to agencies for a nominal $1 fee. The company also works with the Department of Defense, though its policies still prohibit the use of its models for weapons development.
In August, OpenAI announced a competing agreement to supply more than 2 million federal executive branch workers with ChatGPT Enterprise access for $1 per agency for one year. The deal came one day after the General Services Administration signed a blanket agreement allowing OpenAI, Google, and Anthropic to supply tools to federal workers.
The friction comes at an inconvenient time for Anthropic, which is reportedly conducting media outreach in Washington. The administration has repeatedly positioned American AI companies as key players in global competition and expects reciprocal cooperation from these firms. However, this is not Anthropic's first known conflict with Trump administration officials. The company previously opposed legislation that would have prevented US states from passing their own AI regulations.
In general, Anthropic has been walking a fine line between upholding its stated values, pursuing government contracts, and raising venture capital to support its business. For example, in November 2024, Anthropic announced a partnership with Palantir and Amazon Web Services to bring Claude to US intelligence and defense agencies through Palantir's Impact Level 6 environment, which handles data up to the "secret" classification level. The partnership drew criticism from some in the AI ethics community, who saw it as contradicting Anthropic's stated focus on AI safety.
More broadly, the potential surveillance capabilities of AI language models have drawn scrutiny from security researchers. In a December 2023 Slate editorial, security researcher Bruce Schneier warned that AI models could enable unprecedented mass spying by automating the analysis and summarization of vast conversation datasets. Traditional spying methods require intensive human labor, he noted, but AI systems can process communications at scale, potentially shifting surveillance from observing actions to interpreting intent through techniques like sentiment analysis.
As AI models become capable of processing human communications at unprecedented scale, the battle over who gets to use them for surveillance (and under what rules) is just getting started.