On the morning of April 3, 2026 (Beijing time), Google DeepMind unveiled the latest generation of its open-source model series, Gemma 4. The new series comprises four variants: E2B, E4B, 26B MoE, and 31B Dense, covering deployment scenarios from mobile edge devices to high-performance workstations. The 31B Dense model, with 30.7 billion parameters, ranks third among open-source models on the Arena AI text leaderboard, while the 26B MoE model, with only 3.8 billion active parameters, ranks sixth, outperforming competitors with far larger parameter counts.
All Gemma 4 models feature a 256K ultra-long context window, support more than 140 languages, and offer advanced reasoning and complex code generation. They also natively support video and image processing, function calling, and structured JSON output, enabling seamless integration with external tools and APIs. The E2B and E4B variants additionally include audio recognition, allowing fully offline, low-latency operation on mobile devices.
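To illustrate what function calling with structured JSON output enables in practice, here is a minimal sketch of the generic tool-dispatch loop. The tool schema, the `get_weather` function, and the simulated model reply are all hypothetical placeholders; the exact request and response format Gemma 4 uses is not specified in this announcement.

```python
import json

# Hypothetical local tool registry; `get_weather` is an illustrative stand-in,
# not an API from the Gemma release.
TOOLS = {
    "get_weather": lambda city: {"city": city, "forecast": "sunny"},
}

def dispatch(model_output: str):
    """Parse the model's structured JSON output and invoke the named tool."""
    call = json.loads(model_output)   # structured output means this parse is reliable
    fn = TOOLS[call["name"]]          # look up the requested function
    return fn(**call["arguments"])    # call it with the model-supplied arguments

# Stand-in for a real model response; Gemma 4's actual schema may differ.
reply = '{"name": "get_weather", "arguments": {"city": "Beijing"}}'
result = dispatch(reply)
# result == {"city": "Beijing", "forecast": "sunny"}
```

Because the model commits to emitting strict JSON, the calling application can parse and route tool invocations without fragile text scraping.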
A significant highlight of this release is the revised licensing. Google has adopted the business-friendly Apache 2.0 license in place of its previous custom license, allowing developers to freely modify, distribute, and commercialize the models without restriction.
