Anthropic releases custom AI chatbot for classified spy work
Source: Ars Technica
"Claude Gov" is already handling classified information for the US government.

On Thursday, Anthropic unveiled "Claude Gov," a set of specialized AI models designed for US national security customers and built in response to direct feedback from government clients to handle operations such as strategic planning, intelligence analysis, and operational support. The custom models reportedly already serve US national security agencies, with access restricted to those working in classified environments.

The Claude Gov models differ from Anthropic's consumer and enterprise offerings, also called Claude, in several ways. They reportedly "refuse less" when engaging with classified material and are customized to handle intelligence and defense documents. The models also feature what Anthropic calls "enhanced proficiency" in languages and dialects critical to national security operations.

Anthropic says the new models underwent the same "safety testing" as all Claude models. The company has been pursuing government contracts as it seeks reliable revenue sources, partnering with Palantir and Amazon Web Services in November to sell AI tools to defense customers.

Anthropic is not the first company to offer specialized chatbot services for intelligence agencies. In 2024, Microsoft launched an isolated version of OpenAI's GPT-4 for the US intelligence community after 18 months of work. That system, which operated on a special government-only network without Internet access, became available to about 10,000 individuals in the intelligence community for testing and answering questions.

Of course, using AI models for intelligence analysis raises concerns about confabulation, where the models generate plausible-sounding, inaccurate information. Since AI neural networks operate on statistical probabilities rather than functioning as databases, they can potentially misinform intelligence agencies if not used properly. For example, the models may produce convincing but incorrect summaries or analyses of sensitive data, creating risks when accuracy is critical for national security decisions.

Competition heats up for AI defense contracts

Anthropic joins other major AI companies competing for lucrative government work, reports TechCrunch. OpenAI is working to build closer ties with the US Defense Department, while Meta recently made its Llama models available to defense partners. Google is developing a version of its Gemini AI model that can operate within classified environments. Business-focused AI company Cohere is also collaborating with Palantir to deploy its models for government use.

The push into defense work represents a shift for some AI companies that previously avoided military applications. These specialized government models often require different capabilities than consumer AI tools, including the ability to process classified information and work with sensitive intelligence data without triggering safety restrictions that might block legitimate government operations.