According to Google's official announcement, a study introducing a new method called Nested Learning has been accepted for presentation at NeurIPS 2025. The method targets "catastrophic forgetting", the well-known phenomenon in which a machine-learning model, when trained on new data or tasks, loses the knowledge it previously acquired about old ones.

Nested Learning treats a model as a collection of nested sub-optimization problems, each with its own workflow and information flow, so that old and new knowledge can be preserved together. It breaks with the conventional paradigm that strictly separates model architecture from optimization algorithms: by updating different components of the model on different timescales and incorporating a continuous memory system, it mitigates the trade-off in which learning new tasks erases knowledge gained from earlier ones.

HOPE, a system built on the Nested Learning framework, has shown strong results across a variety of benchmark tests. For instance, it achieved a 23% improvement in long-text retrieval accuracy and maintained a 98% retention rate of old-task performance in continual-learning scenarios.
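To make the multi-timescale idea concrete, here is a minimal sketch of two-timescale updates with a slow "continuous memory". This is an illustrative toy, not Google's actual HOPE or Nested Learning code; the variable names (fast_w, slow_w), the learning rates, the consolidation period, and the regularization term are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """Generate a small linear-regression task: y = X @ true_w + noise."""
    X = rng.normal(size=(200, 3))
    y = X @ true_w + 0.01 * rng.normal(size=200)
    return X, y

dim = 3
fast_w = np.zeros(dim)   # inner level: adapts quickly to the current task
slow_w = np.zeros(dim)   # outer level: slow memory that consolidates knowledge

fast_lr = 0.05           # fast timescale: applied at every step
slow_lr = 0.1            # slow timescale: applied every `period` steps
period = 20              # how often the slow memory consolidates
reg = 0.5                # pulls fast weights toward the slow memory

# Tasks arrive sequentially, as in a continual-learning setting.
tasks = [make_task(rng.normal(size=dim)) for _ in range(3)]

step = 0
for X, y in tasks:
    for _ in range(100):  # inner optimization loop on the current task
        grad = X.T @ (X @ fast_w - y) / len(y)
        # Regularizing toward slow_w discourages overwriting old knowledge.
        fast_w -= fast_lr * (grad + reg * (fast_w - slow_w))
        step += 1
        if step % period == 0:
            # Slow timescale: fold the fast weights into the memory.
            slow_w += slow_lr * (fast_w - slow_w)

print("fast weights:", fast_w)
print("slow memory :", slow_w)
```

In this toy setup, the fast weights chase each new task while the slow memory changes only gradually and anchors them, so knowledge from earlier tasks is not fully overwritten. The paper's actual mechanism operates on nested optimization levels inside the architecture itself; this sketch only illustrates the separation of update timescales.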
