AI firm Anthropic has announced that the recent decline in response quality observed across the Claude model series has been resolved. The degradation was traced to two technical faults: one present from August 5 to September 4, and another, overlapping it, from August 26 to September 5. Anthropic stressed that these were unintentional bugs, not the result of demand fluctuations or cost-cutting, and that the models were never deliberately "dumbed down" to reduce expenses.

All affected models are now operating normally again. The Anthropic team monitors model quality closely, and feedback from community users was instrumental in identifying and diagnosing the problems, which had repercussions across multiple service platforms. Anthropic has underscored its commitment to technical excellence and accountability, pledging to maintain the stability and efficiency of its services going forward.