Recently, the domestic AI unicorn MiniMax (Xiyu Jizhi) officially released and open-sourced its latest-generation large language model, MiniMax-M2. This lightweight model activates only 10 billion (10B) of its 230B total parameters. On the widely followed Artificial Analysis (AA) benchmark, it ranks among the top five models worldwide and first among open-source models, placing it firmly in the global elite tier.

In terms of pricing, its API charges $0.3 (about 2.1 RMB) per million input tokens and $1.2 (about 8.4 RMB) per million output tokens, less than 8% of the price of Claude Sonnet 4.5, while delivering nearly twice the inference speed.

The M2 model has been deeply optimized for coding and agent tasks. It offers end-to-end development and execution capabilities, including automatic debugging and repair across multiple code files. Its lightweight design brings lower latency and cost along with higher throughput, making it well suited to the efficient collaboration and rapid response that emerging multi-agent workflows demand.

In addition, MiniMax is offering free global API access for two weeks following the model's release. It has also launched a domestic product, MiniMax Agent, which provides both "Efficient" and "Professional" modes to cover different usage scenarios.
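The quoted per-million-token prices translate directly into per-request costs. A minimal back-of-the-envelope sketch, where the request sizes (50k input tokens, 8k output tokens) are illustrative assumptions rather than figures from the announcement:

```python
# Cost estimate using the quoted MiniMax-M2 API prices:
# $0.3 per million input tokens, $1.2 per million output tokens.
M2_INPUT_USD_PER_MTOK = 0.3
M2_OUTPUT_USD_PER_MTOK = 1.2

def request_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at the quoted per-million-token rates."""
    return (input_tokens * M2_INPUT_USD_PER_MTOK
            + output_tokens * M2_OUTPUT_USD_PER_MTOK) / 1_000_000

# Hypothetical agentic coding turn: large context in, a long patch out.
cost = request_cost_usd(input_tokens=50_000, output_tokens=8_000)
print(f"${cost:.4f}")  # 0.015 + 0.0096 = $0.0246
```

At these rates, even context-heavy agent loops stay in the sub-cent-to-few-cents range per turn, which is the practical upshot of the pricing comparison above.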
