Anthropic’s Powerful AI Model Mythos Faces Exaggeration Claims: The Hype Over Thousands of Vulnerabilities Is Unfounded
4 days ago / Read time: about 0 minutes
Author: 小编 (Editor)

Recently, Anthropic unveiled its Claude Mythos Preview, a large AI model it touted as the most formidable of its kind. Citing the model's potential global influence, Anthropic launched the Project Glasswing program, partnering with various companies and institutions to grant limited access for the purpose of improving software security. After the release, financial institutions worldwide convened experts to assess its possible impact.

Anthropic promoted Mythos's ability to identify security vulnerabilities as its primary strength, citing as a prime example the discovery of a flaw in the OpenBSD system that had eluded human detection for 27 years. However, an investigation by Tom's Hardware has cast doubt on Anthropic's claim of uncovering thousands of security vulnerabilities, suggesting that the figure is inflated and was extrapolated from a small number of manual audit reports. Moreover, most of the vulnerabilities detected during testing were found in outdated software, and many turned out to be functional bugs rather than genuine security threats.

In addition, Mythos has not been made accessible to the general public, likely for cost reasons. Although Microsoft and Amazon Web Services offer access to it, the price remains prohibitively high. As a result, Anthropic's credibility in the field of programming-oriented large models has suffered, and the company is frequently accused of invoking the specter of AI as a threat to humanity as a marketing ploy.