On March 3, 2026, Axios, citing sources familiar with the matter, reported that OpenAI and the U.S. Department of Defense have agreed to add stricter restrictive clauses to their recently signed artificial intelligence cooperation agreements, responding to outside criticism over the risk of large-scale domestic surveillance. The new clauses explicitly state that, in compliance with the Fourth Amendment to the U.S. Constitution, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978, the AI systems shall not be deliberately used for domestic surveillance of U.S. citizens and nationals. This includes a prohibition on intentionally tracking, monitoring, or surveilling U.S. citizens or nationals through the procurement or use of commercially obtained personal or identifiable information. The added language has not yet been formally signed into effect.

Earlier, Anthropic's agreement with the Pentagon on national security-related uses of its Claude model had sparked controversy, putting the future of OpenAI's collaboration with the Department of Defense under pressure. OpenAI CEO Sam Altman proactively engaged with Emil Michael, the U.S. Under Secretary of Defense for Research and Engineering, to push for contract revisions that strengthen privacy and compliance boundaries. The agreement text remains under revision, with the final version and implementation details pending official confirmation.
