Claude Code, the AI programming tool developed by Anthropic, is grappling with a reputation crisis. Stella Laurenzo, who leads AMD's AI team, posted an analysis report on GitHub. Drawing on a quantitative analysis of tens of thousands of conversation logs, she argued that since February of this year Claude Code has undergone systemic capability deterioration, with its thinking depth plummeting by a striking 67% and the model exhibiting abnormal behavior. The report has ignited widespread debate in the developer community, with numerous users corroborating that Claude Code's performance has indeed declined.
In response, Anthropic stated that the hidden-thinking-content feature does not affect the model's reasoning. Instead, the company suggested that the "adaptive thinking" mechanism introduced in February, along with adjustments to the default effort level, might be responsible for the observed issues, and advised users to manually revert to the high-intensity thinking mode. This explanation, however, has done little to alleviate concerns.
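At the API level, Anthropic's Messages API does expose an explicit extended-thinking budget, which is the closest public analogue of "manually reverting to the high-intensity thinking mode" rather than relying on adaptive defaults. Below is a minimal sketch of setting that budget by hand; the model id and the `budget_tokens` value are illustrative assumptions, not values taken from the report:

```python
# Sketch: explicitly requesting a large extended-thinking budget via the
# Anthropic Messages API instead of relying on adaptive/default settings.
# The model id and budget_tokens value are illustrative assumptions.
import os


def build_request(prompt: str, thinking_budget: int = 16000) -> dict:
    """Assemble Messages API kwargs with an explicit thinking budget."""
    return {
        "model": "claude-sonnet-4-20250514",   # assumed model id
        "max_tokens": thinking_budget + 4096,  # must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": thinking_budget},
        "messages": [{"role": "user", "content": prompt}],
    }


request = build_request("Refactor this function to be tail-recursive.")
print(request["thinking"])

# Only perform the network call if credentials are configured.
if os.environ.get("ANTHROPIC_API_KEY"):
    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(**request)
```

Pinning the budget explicitly removes the adaptive mechanism from the equation, which is exactly the kind of workaround the company's guidance points toward.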
The data indicates that Claude Code has not only suffered a decline in thinking depth but has also shifted its tool-usage patterns, with an uptick in undesirable behaviors. Laurenzo contends that the hidden-thinking-content feature obscures the model's true degradation. Anthropic maintains that the problem stems from configuration settings rather than model deterioration, but this position has yet to dispel outside skepticism.
The decline in capabilities has also driven up usage costs, prompting some users to switch to alternative tools. For now, developers are devising temporary workarounds, while Laurenzo has put forward more comprehensive improvement proposals.
