Apple has published a thought-provoking paper arguing that current AI reasoning models do not genuinely reason but instead rely on pattern matching. The paper also criticizes existing benchmarks for scoring only final answers while overlooking the quality of the reasoning process itself. To test this claim, Apple built four puzzle environments specifically designed to probe the models' reasoning ability. The results revealed a concerning trend: as problem complexity grew, the models' depth of reasoning actually diminished, and performance collapsed entirely past a critical threshold.

Online reactions have been mixed, ranging from mockery of Apple's own lagging AI progress to outright dismissal, while others see the paper as a catalyst for developing more robust reasoning mechanisms and better evaluation methods.
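To make the complexity-scaling idea concrete, here is a minimal sketch (not Apple's actual harness) of how such an evaluation could work, using Tower of Hanoi as the puzzle: difficulty is dialed up by adding disks, a candidate move sequence is verified mechanically, and a stand-in "model" is used to mimic the reported collapse past a threshold. The `toy_model` function and its `threshold` parameter are illustrative assumptions, not anything from the paper.

```python
# Hypothetical evaluation sketch: puzzle difficulty scales with disk count,
# and solutions are checked by replaying the moves.

def hanoi_moves(n, src=0, aux=1, dst=2):
    """Optimal move sequence for n disks; its length is 2**n - 1."""
    if n == 0:
        return []
    return (hanoi_moves(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_moves(n - 1, aux, src, dst))

def is_valid_solution(n, moves):
    """Replay a move list and check it legally transfers all n disks."""
    pegs = [list(range(n, 0, -1)), [], []]  # disk n at bottom, disk 1 on top
    for src, dst in moves:
        if not pegs[src]:
            return False  # illegal: no disk to move
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False  # illegal: larger disk on top of a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n, 0, -1))

# Stand-in "model" (an assumption for illustration): it solves small
# instances but fails beyond a complexity threshold, mimicking the
# collapse the paper describes.
def toy_model(n, threshold=7):
    return hanoi_moves(n) if n <= threshold else []

for n in range(1, 11):
    solved = is_valid_solution(n, toy_model(n))
    print(f"disks={n:2d} optimal_moves={2**n - 1:4d} solved={solved}")
```

In a real harness, `toy_model` would be replaced by a call to the model under test, with its emitted move list parsed and verified the same way; the verifier, not the model's prose, decides correctness.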