In court filings responding to a lawsuit from AI startup Anthropic, the U.S. Department of Justice has mounted a forceful defense, arguing that designating Anthropic a 'supply chain risk' does not violate the company's First Amendment rights and predicting that the legal challenge will ultimately fail.

At the heart of the dispute are Anthropic's restrictions on military use of its Claude model. The government, by contrast, views the company as a risk entity, citing potential compliance penalties along with safety and trust concerns. The Trump administration intends to remove Anthropic from its list of government suppliers, a move that could cost the company billions of dollars but has received backing from its industry counterparts.

Anthropic, committed to its principle of 'AI safety', refuses to let its technology be deployed in autonomous weapons or for government surveillance, and now faces being shut out of the military contracting market. Meanwhile, its Microsoft-backed rival OpenAI is already undergoing technology evaluations by the Pentagon.
