On March 31 (local time), Anthropic suffered a significant leak: roughly 512,000 lines of source code from Claude Code, its core AI programming tool, were exposed to the public, spanning 1,906 source files and more than 40 tool modules. The root cause was an npm package bundling mistake that shipped debug files containing full source-code mapping details in the production release package. Model weights and user data remained secure, but key elements of the system, including its architecture, prompts, and tool-invocation mechanisms, were exposed.
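The failure mode described above is a well-known npm pitfall: build tooling emits source maps alongside production bundles, and unless the package manifest restricts what gets published, those maps ship in the release tarball. As an illustrative sketch only (the names here are hypothetical and this is not Anthropic's actual configuration), a `files` allowlist in package.json limits the published tarball to explicitly named paths, so stray debug artifacts are never packed:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example": "dist/cli.js" },
  "files": [
    "dist/cli.js",
    "README.md"
  ]
}
```

Running `npm pack --dry-run` prints the exact file list that would be published, which makes unexpected `*.map` files easy to catch before release; adding `*.map` to `.npmignore` is an alternative safeguard when a full allowlist is impractical.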
The leaked code spread rapidly across GitHub, where it drew more than 10,000 stars and over 20,000 backup copies. Anthropic responded promptly, attributing the incident to human error during release bundling rather than a security vulnerability, and pledged measures to prevent a recurrence. It is the company's second major data mishap in a week, following an earlier episode in which nearly 3,000 internal sensitive files were accidentally released. The back-to-back breaches have dented Anthropic's 'security-first' reputation and exposed gaps in its security management amid rapid expansion.
