Anthropic has suffered its second significant data leak in a single week. On March 31 (local time), the npm package for version 2.1.88 of its core AI programming tool, Claude Code, inadvertently included debug files containing 512,000 lines of unobfuscated source code, which quickly spread across GitHub. Anthropic has acknowledged the incident, attributing it to human error during the packaging process and stating that no sensitive customer data was compromised.
Only days earlier, the company had made headlines for exposing nearly 3,000 sensitive internal files through a misconfiguration in its external content management system. And in February 2025, a preview version of Claude Code had suffered a partial code leak stemming from the same vulnerability.
Founded by former OpenAI employees, Anthropic is dedicated to developing the Claude family of AI models. The latest code leak could shorten the time it takes competitors to catch up, tarnish the company's 'safety-first' reputation, and expose flaws in its security management practices. Going forward, Anthropic will need to close these gaps, rebuild trust, and put in place a comprehensive security verification mechanism that balances technological confidentiality with industry transparency.
