Apple's research team recently published a paper examining the limitations of mainstream large reasoning models (LRMs) on complex problems. The study finds that once problem complexity crosses a critical threshold, model accuracy abruptly collapses to zero; the models also exhibit 'overthinking' on simpler tasks and, counterintuitively, reduce their reasoning effort as problems grow harder.