On March 18, 2026, Xiru Technology released its latest large-scale model, Minimax 2.7, which delivered notable improvements in programming ability, including code generation and program comprehension. The decision not to open-source the model at launch initially drew criticism in the tech community. The team responded by acknowledging that it had underestimated the work required to prepare an open-source release and committed to completing the transition within three weeks of launch.

True to that commitment, on April 12, 2026, Minimax 2.7 was officially open-sourced to developers worldwide, joining cost-effective domestic open-source large models such as Zhipu AI's GLM-5.1. Its overall performance is comparable to internationally recognized closed-source models such as Claude Opus and GPT-5.4 Pro, while its inference and deployment costs are lower, making it a more practical option for small and medium-sized developers, AI application teams, and individual researchers.

Official benchmarks show Minimax 2.7 scoring 56.22% on the SWE-Pro evaluation, close to the industry's best results. In real-world use on the OpenClaw platform, its response quality and logical consistency improved noticeably over its predecessor, Minimax 2.5, and under the MMClaw multi-dimensional assessment framework its performance is comparable to the newly released Sonnet 4.6.

Domestic large models currently dominate the open-source AI field, led by Alibaba's Tongyi Qianwen, Zhipu AI's GLM, Xiru's Minimax, and DeepSeek. GLM-5.1 and Minimax 2.7 are both already open source, while open-source plans for Tongyi Qianwen 3.6 Plus remain undecided. Industry attention has now turned to DeepSeek V4, slated for release in late April 2026 and expected to support domestic AI hardware-software collaborative architectures and integrate multimodal processing capabilities.
