On February 9, the project page of Hugging Face, the world's largest open-source AI community, showed that a new code merge request for Qwen3.5 had been submitted to the Transformers library. Industry observers speculate that Alibaba's next-generation foundation model, Qwen3.5, is on the brink of release. According to reports, Qwen3.5 incorporates an innovative hybrid attention mechanism and may be a vision-language model (VLM) with native support for visual understanding. Developers have further noted that Qwen3.5 is likely to be open-sourced in at least two variants: a dense model with 2 billion parameters, and a Mixture-of-Experts (MoE) model with 35 billion total parameters of which roughly 3 billion are active per token (in an MoE model, only a subset of the experts is used for each token, so the compute cost tracks the active rather than the total parameter count).
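
If the merge request lands and a checkpoint is published, loading the model should follow the usual Transformers pattern. The sketch below is illustrative only: the repository ID Qwen/Qwen3.5-35B-A3B is a hypothetical name inferred from the rumored MoE variant, and it assumes a text-capable checkpoint loadable via AutoModelForCausalLM; neither is confirmed.

```python
# A minimal sketch, assuming Qwen3.5 support is merged into Transformers and a
# checkpoint exists under the hypothetical repo ID below (not confirmed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-35B-A3B"  # hypothetical ID, based on the rumored 35B-total / 3B-active variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread weights across available GPUs/CPU
)

prompt = "Summarize the idea behind mixture-of-experts models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```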
