On September 29, 2025, DeepSeek released and open-sourced its next-generation model, DeepSeek-V3.2-Exp, with 685 billion parameters. Built on V3.1-Terminus, the model introduces a sparse attention mechanism designed to improve both training and inference efficiency, particularly on long-text workloads. On the same day, Cambricon announced that it had completed adaptation of DeepSeek-V3.2-Exp and open-sourced the code of its vLLM-MLU inference engine.
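To illustrate the general idea behind sparse attention (this is a generic top-k sketch, not DeepSeek's specific design, which the article does not detail), the snippet below restricts each query to its top-k highest-scoring keys instead of attending to all of them. For clarity, this toy version still computes the full score matrix and then masks it; a production kernel would avoid computing the non-selected scores entirely, which is where the long-context efficiency gain comes from.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Toy top-k sparse attention (illustrative, not DeepSeek's mechanism).

    Each query attends only to its top_k highest-scoring keys; all other
    attention weights are forced to zero before the softmax.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (n_q, n_k) full scores
    # Build an additive mask: 0 for the top_k entries per row, -inf elsewhere
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=-1)
    masked = scores + mask
    # Softmax over the surviving entries only
    w = np.exp(masked - masked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))    # 4 queries, head dim 8
k = rng.standard_normal((16, 8))   # 16 keys
v = rng.standard_normal((16, 8))
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (4, 8)
```

With `top_k` fixed, the per-query cost of the weighted sum scales with `top_k` rather than with sequence length, which is why this family of techniques targets long-context scenarios.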