MiniMax M2.1 has made its official debut. An open-source coding and agent model with 10 billion active parameters, it is designed for real-world coding scenarios and the needs of AI-native organizations.
Across a series of benchmarks, including SWE-multilingual and VIBE-Bench, MiniMax M2.1 performed strongly, outperforming proprietary models such as Gemini 3 Pro and Claude Sonnet 4.5. The VIBE-Bench evaluation in particular gives developers a rich set of testing references.
At the launch event, the MiniMax team thanked its early testing partners and other contributors, whose support and feedback drove the model's ongoing refinement.
Notably, GLM was officially released just one day before MiniMax M2.1. The two models achieved comparable scores on SWE-Bench, together underscoring the strength of open-source models.
In multilingual programming, MiniMax M2.1 has also reached SOTA performance, surpassing many comparable competitors.
For the official announcement, visit https://www.minimax.io/news/minimax-m21.
