Open-Source Release of Yuan 3.0 Flash Multimodal Foundational Model
2025-12-31
Author: Editor

The team at YuanLab.ai has officially open-sourced the Yuan 3.0 Flash multimodal foundational model. The model has 40 billion parameters and uses a sparse Mixture of Experts (MoE) architecture, so only about 3.7 billion parameters are activated during each inference pass. Trained with innovative reinforcement learning techniques, Reinforcement Learning with Adaptive Policy Optimization (RAPO) and a Reflective Inhibition Reward Mechanism (RIRM), the model achieves higher reasoning accuracy while markedly reducing the number of tokens consumed during inference, which translates into substantial savings in compute cost.

In enterprise applications such as Retrieval-Augmented Generation (RAG), multimodal retrieval, multimodal table understanding, and summarization, Yuan 3.0 Flash outperforms GPT-5.1, underscoring its edge in real-world business scenarios. The model is also fully open-source: all of its parameters and code can be freely downloaded and used, so the community can perform secondary training and tailor the model to specific industry needs.
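To illustrate why a sparse MoE model with 40 billion total parameters activates only a few billion per inference, below is a minimal top-k routing sketch. It is not the Yuan 3.0 Flash implementation; the expert count, hidden sizes, and top-k value are illustrative assumptions, and the class and parameter names are hypothetical.

```python
# Minimal sketch of top-k sparse MoE routing (illustrative only; the real
# Yuan 3.0 Flash expert count, sizes, and routing details are not published here).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    def __init__(self, d_model=1024, d_ff=4096, n_experts=32, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # gating network scores each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                  # x: (tokens, d_model)
        gate_logits = self.router(x)                       # (tokens, n_experts)
        weights, chosen = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)               # normalize over the k chosen experts
        out = torch.zeros_like(x)
        # Only the k selected experts run for each token, so the number of
        # parameters active per forward pass is a small fraction of the total.
        for slot in range(self.top_k):
            for e in chosen[:, slot].unique():
                mask = chosen[:, slot] == int(e)
                out[mask] += weights[mask, slot].unsqueeze(1) * self.experts[int(e)](x[mask])
        return out

tokens = torch.randn(8, 1024)
print(SparseMoELayer()(tokens).shape)  # torch.Size([8, 1024])
```

With 2 of 32 experts active per token in this sketch, most expert weights sit idle on any given pass; the same principle is what lets a 40B-parameter MoE model run inference at roughly the cost of a 3.7B-parameter dense model.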