Apple recently released a research paper questioning whether current AI reasoning models genuinely "think." The paper argues that models such as DeepSeek, o3-mini, and Claude 3.7 rely fundamentally on pattern matching, and that the appearance of thinking is nothing more than an illusion. Testing the models in controlled puzzle environments, Apple's team found that their performance declines sharply on high-complexity tasks and that they fail to sustain deep reasoning even when given ample compute. The findings have ignited debate within the industry, with some speculating that Apple is advocating for more sophisticated reasoning mechanisms and evaluation frameworks.
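One way to see what a "controlled puzzle environment" means in practice is the Tower of Hanoi, a puzzle of the kind reported in this line of work: difficulty scales with a single knob (the number of disks), and a model's answer can be checked exactly by replaying its moves, independently of any chain of thought. The sketch below is a minimal, hypothetical illustration of such an environment, not Apple's actual evaluation code; all function and variable names are assumptions.

```python
# Minimal sketch of a controlled puzzle environment: Tower of Hanoi with a
# single complexity knob (disk count) and an exact verifier for model output.
# Illustrative only; not the paper's actual harness.

def solved(pegs, n):
    """True when all n disks sit on the last peg, largest at the bottom."""
    return pegs[2] == list(range(n, 0, -1))

def apply_moves(n, moves):
    """Replay (src, dst) peg moves from the start state; return True only if
    every move is legal and the final state is solved."""
    pegs = [list(range(n, 0, -1)), [], []]  # disks numbered n (largest) .. 1 (smallest)
    for src, dst in moves:
        if not pegs[src]:
            return False                      # nothing to move from an empty peg
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                      # larger disk may not rest on a smaller one
        pegs[dst].append(disk)
    return solved(pegs, n)

def reference_solution(n, src=0, aux=1, dst=2):
    """Classic recursive solution: 2**n - 1 moves, a yardstick for difficulty."""
    if n == 0:
        return []
    return (reference_solution(n - 1, src, dst, aux)
            + [(src, dst)]
            + reference_solution(n - 1, aux, src, dst))

if __name__ == "__main__":
    # Raising the disk count grows the required solution length exponentially,
    # which is the kind of compositional-depth knob such puzzle benchmarks use.
    for n in range(3, 9):
        ref = reference_solution(n)
        assert apply_moves(n, ref)
        print(f"{n} disks -> optimal solution length {len(ref)}")
```

A model under test would be prompted with the start state and asked for a move list; `apply_moves` then scores the answer objectively, which is what lets researchers track exactly where performance breaks down as complexity increases.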