U.S.-based artificial intelligence firm Anthropic has revised its terms of service to explicitly bar companies headquartered in China, Russia, North Korea, Iran, and certain other nations, along with their overseas affiliates, from accessing its AI assistant, Claude. The decision is part of a broader effort to prevent entities in those countries from sidestepping restrictions through foreign subsidiaries, and to guard against the misuse of its technology in military or intelligence operations that could jeopardize U.S. national security interests. Previously, some overseas offerings from major Chinese tech firms had incorporated Anthropic's AI capabilities.

Anthropic, an AI startup founded by former OpenAI members, focuses on developing trustworthy, interpretable, and controllable AI systems. Its flagship product, the Claude series, is known for features such as long context windows.

The move underscores growing fragmentation in the global tech ecosystem, suggesting that more firms may face difficult choices about which technology services they can use in the years ahead.