On March 6, 2026, the YuanLab.ai team fully open-sourced Yuan3.0 Ultra, a multimodal foundation model at the trillion-parameter scale. The model adopts a unified multimodal architecture consisting of a visual encoder, a language backbone, and a multimodal alignment module. The language backbone uses a Mixture-of-Experts (MoE) design with 68.8B activated parameters and is optimized with the LAEP algorithm, which improves pre-training computational efficiency by 49%. The model also introduces the LFA mechanism to improve accuracy, delivering strong performance on enterprise-level tasks.
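
The announcement does not include a reference implementation, but the described layout follows a common pattern: image features from a visual encoder are projected by an alignment module into the language model's token space, and the backbone processes the combined sequence through sparsely activated MoE layers, so only a few experts run per token. The following PyTorch sketch illustrates that pattern under stated assumptions; all class names, dimensions, expert counts, and the top-k routing scheme are illustrative placeholders, not Yuan3.0 Ultra's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Sparse Mixture-of-Experts feed-forward layer: a router selects the
    top-k experts per token, so only a fraction of parameters is active."""
    def __init__(self, dim, num_experts=8, top_k=2, hidden_mult=4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim * hidden_mult), nn.GELU(),
                          nn.Linear(dim * hidden_mult, dim))
            for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x):                       # x: (tokens, dim)
        scores = self.router(x)                 # (tokens, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):             # dispatch tokens to experts
            for e in range(len(self.experts)):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * self.experts[e](x[mask])
        return out

class MultimodalModel(nn.Module):
    """Visual encoder -> alignment module -> MoE language backbone,
    mirroring the three-component architecture in the announcement."""
    def __init__(self, dim=512, vocab=32000):
        super().__init__()
        self.vision_encoder = nn.Sequential(    # toy stand-in for a ViT patchifier
            nn.Conv2d(3, dim, kernel_size=16, stride=16), nn.Flatten(2))
        self.align = nn.Linear(dim, dim)        # multimodal alignment projection
        self.embed = nn.Embedding(vocab, dim)
        self.backbone = MoELayer(dim)           # one MoE block stands in for the stack
        self.lm_head = nn.Linear(dim, vocab)

    def forward(self, image, input_ids):
        img_tokens = self.vision_encoder(image).transpose(1, 2)  # (B, N_img, dim)
        img_tokens = self.align(img_tokens)     # map image features to token space
        txt_tokens = self.embed(input_ids)      # (B, N_txt, dim)
        seq = torch.cat([img_tokens, txt_tokens], dim=1)
        B, N, D = seq.shape
        hidden = self.backbone(seq.reshape(B * N, D)).reshape(B, N, D)
        return self.lm_head(hidden)

model = MultimodalModel()
logits = model(torch.randn(1, 3, 224, 224), torch.randint(0, 32000, (1, 16)))
print(logits.shape)  # torch.Size([1, 212, 32000]): 196 image + 16 text tokens
```

The efficiency claim in the announcement rests on this kind of sparsity: with 2 of 8 experts active per token in the sketch, roughly a quarter of the expert parameters participate in any forward pass, which is how a trillion-parameter-scale model can run with only 68.8B activated parameters.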
