On April 16, it was reported that Qwen3.6-35B-A3B has been open-sourced. The model uses a sparse Mixture-of-Experts (MoE) architecture with 35 billion total parameters, of which only about 3 billion are activated for each inference step. Despite this small active parameter count, it delivers strong performance: on agentic coding tasks it significantly outperforms its predecessor, Qwen3.5-35B-A3B, and is competitive with larger dense models such as Qwen3.5-27B and Gemma-31B.
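
The 35B-total / 3B-active split is the defining property of sparse MoE: a router selects a few experts per token, so most of the network's weights sit idle on any given forward pass. The following is a minimal, illustrative PyTorch sketch of top-k expert routing; the class name, expert count, and top_k value are hypothetical and do not reflect Qwen's actual implementation or configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy top-k MoE layer (illustrative, not Qwen's code): only top_k of
    n_experts run per token, so only a fraction of parameters are active."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 64, top_k: int = 4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). The router scores every expert for each token.
        logits = self.router(x)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)            # normalize over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * self.experts[e](x[mask])
        return out

if __name__ == "__main__":
    layer = SparseMoELayer(d_model=256, d_ff=1024, n_experts=64, top_k=4)
    y = layer(torch.randn(8, 256))
    expert_params = sum(p.numel() for p in layer.experts.parameters())
    # With top_k=4 of 64 experts, roughly 1/16 of the expert parameters
    # participate in any single token's forward pass.
    print(y.shape, expert_params, expert_params * layer.top_k // len(layer.experts))
```

Under this kind of routing, parameter count and per-token compute decouple: the toy layer above stores all 64 experts' weights but spends FLOPs on only 4 of them per token, which is the same principle behind a 35B-parameter model activating only about 3B parameters at inference time.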
