Alibaba Unveils Qwen3-235B-A22B-Thinking-2507, Topping Global Open-Source Benchmarks and Rivaling Closed-Source Models for the Second Time in a Week
Author: Editorial Staff

Alibaba's Tongyi Qianwen (Qwen) team last night released the latest iteration of its Qwen3-235B-A22B reasoning model, dubbed Qwen3-235B-A22B-Thinking-2507. The model has 235 billion total parameters, of which 22 billion are active per inference, and supports a context length of up to 256K tokens. Across capability evaluations spanning programming, mathematics, knowledge reasoning, and alignment with human preferences, Qwen3-235B-A22B-Thinking-2507 rivals top-tier closed-source models such as Gemini-2.5 Pro and o4-mini, while significantly outperforming open-source models such as DeepSeek-R1. With this release, the team has set a new state-of-the-art (SOTA) standard for global open-source models for the second time within a single week.