Early this morning, Ant Group open-sourced its first self-developed trillion-parameter large model, Ring-1T-preview, making it the world's first open-source trillion-parameter reasoning model. The preview version shows strong natural-language reasoning ability, performing well on authoritative benchmarks such as AIME 25 and CodeForces, and also posting a solid result on the IMO 25 test. The Ant Bailing team is continuing post-training work on the 1T language foundation model in the Ling 2.0 family, and the official release of Ring-1T is still in training.