Google’s Open-Source Large Model Gemma 4 Poised for Launch: Parameter Count to Quadruple
Author: Editor

Google DeepMind has teased the upcoming release of its next-generation open-source large model, Gemma 4, roughly one year after the debut of its predecessor. Gemma 4 is set to scale up substantially, introducing a model with 120 billion parameters. Because it uses a Mixture of Experts (MoE) architecture, the new model is expected to run on consumer-grade graphics cards despite its size. Its context-processing capacity is reportedly expected to improve by 100% to 200%, alongside stronger logical reasoning and complex-task execution.
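The reason an MoE model of this size could plausibly run on consumer hardware is that only a small subset of "expert" sub-networks is activated per token, so most of the 120 billion parameters sit idle on any given forward pass. The toy sketch below illustrates top-k routing under assumed, illustrative sizes; nothing here reflects Gemma 4's actual architecture, expert count, or gating design.

```python
# Toy sketch of Mixture-of-Experts (MoE) top-k routing.
# All sizes and functions are illustrative assumptions, not Gemma 4's design.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, experts, gate_weights, top_k=2):
    """Route one token through only top_k of the experts.

    token:        a single scalar feature (toy stand-in for a hidden vector)
    experts:      list of callables, each a tiny stand-in 'expert' network
    gate_weights: one gating parameter per expert (toy linear gate)
    """
    logits = [w * token for w in gate_weights]   # toy gating logits
    probs = softmax(logits)
    # Keep only the top_k experts; the rest never run, saving compute and memory.
    ranked = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)
    active = ranked[:top_k]
    norm = sum(probs[i] for i in active)
    # Renormalized, gated combination of just the active experts' outputs.
    return sum(probs[i] / norm * experts[i](token) for i in active)

# Eight toy experts, each simply scaling its input by a different factor.
experts = [lambda x, k=k: (k + 1) * x for k in range(8)]
gate_weights = [0.1 * k for k in range(8)]

out = moe_forward(1.0, experts, gate_weights, top_k=2)
print(out)  # a blend of only 2 of the 8 experts' outputs
```

In a real MoE transformer the experts are feed-forward blocks and the gate is a learned projection over the hidden state, but the principle is the same: per-token compute scales with the active experts, not the total parameter count.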

To counterbalance the growing dominance of Chinese companies in the open-source ecosystem, Google has reportedly timed the release strategically: the open-source Gemma 4 is to arrive more than six months after its closed-source flagship counterpart. Gemma 4 is aimed primarily at localized deployment, positioning it to compete directly with Chinese open-source models.

With the introduction of Gemma 4, the competitive bar for open-source large models is set to rise again. Whether it can outperform Chinese open-source models of similar parameter count will be a key question for the global AI community in the second half of the year.