On December 5, Tencent Hunyuan officially announced the launch of its latest language models: Tencent HY 2.0 Think and Tencent HY 2.0 Instruct. Both are built on a Mixture of Experts (MoE) architecture with 406 billion total parameters, of which 32 billion are activated, and support a 256K context window.
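The gap between total and activated parameters is the defining property of an MoE architecture: a router selects a small subset of experts for each token, so only a fraction of the model's weights participate in any single forward pass. The toy sketch below illustrates the idea with top-k routing; all sizes, names, and the routing details are illustrative assumptions, not Hunyuan's actual configuration.

```python
import numpy as np

# Minimal sketch of top-k expert routing in an MoE layer.
# All dimensions are toy values for illustration only --
# not Hunyuan's real architecture or parameters.

rng = np.random.default_rng(0)

d_model = 8     # hidden size (toy)
n_experts = 4   # total experts in the layer
top_k = 2       # experts activated per token

# Each expert is a simple linear transform; the router scores experts per token.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts))

def moe_layer(x):
    """Route each token to its top_k experts and mix their outputs."""
    logits = x @ router_w                          # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]  # chosen expert indices
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        weights = np.exp(chosen - chosen.max())
        weights /= weights.sum()                   # softmax over selected experts
        # Only top_k of n_experts run for this token -- the rest stay idle,
        # which is why activated parameters are far fewer than total parameters.
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])
    return out

tokens = rng.standard_normal((3, d_model))
y = moe_layer(tokens)
print(y.shape)  # each token used only 2 of the 4 experts
```

At scale the same principle holds: every token touches only the routed experts, so a 406B-parameter model pays roughly the compute cost of a 32B dense model per token.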
Compared with the previous version (Hunyuan-T1-20250822), HY 2.0 Think features substantial improvements in pre-training data and reinforcement learning strategy, achieving industry-leading reasoning capability and efficiency.
The models are already integrated into Tencent's AI-native applications, including Yuanbao and ima, and Tencent Cloud provides API and platform services for them.
