On February 10, 2026, Tencent Hunyuan formally introduced HY-1.8B-2Bit, a highly compact AI model built for consumer hardware. With an equivalent parameter count of just 0.3 billion and a memory footprint of roughly 600MB, the model takes up less storage than most mobile applications. Built on what Tencent describes as the first industrial-grade 2-bit on-device quantization solution, it is compressed from HY-1.8B-Instruct through 2-bit Quantization-Aware Training (QAT). This reduces the parameter count to one-sixth of the original and delivers generation speeds 2-3 times faster on edge devices, while preserving the full model's reasoning capabilities.
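The compression described above can be illustrated with a minimal sketch of 2-bit fake quantization, the core mechanism behind QAT, along with the arithmetic behind the ~600MB footprint. The function name, the per-tensor scale, and the four-level mapping are illustrative assumptions; Hunyuan has not published the details of its scheme.

```python
import numpy as np

def fake_quant_2bit(w):
    """Sketch of symmetric 2-bit fake quantization as used in QAT:
    the forward pass sees weights snapped to 4 discrete levels, while
    real training passes gradients through the rounding step via a
    straight-through estimator. Illustrative only; Hunyuan's actual
    quantization scheme is not public."""
    scale = np.abs(w).max() / 2.0              # per-tensor scale (assumption)
    q = np.clip(np.round(w / scale), -2, 1)    # 4 integer levels fit in 2 bits
    return q * scale, q.astype(np.int8)        # dequantized view + stored codes

# Why ~600MB: 1.8B weights at 2 bits each is about 0.45 GB;
# quantization scales, embeddings, and runtime buffers plausibly
# account for the remaining headroom up to the reported ~600MB.
weight_gb = 1.8e9 * 2 / 8 / 1e9
```

Storing only the 2-bit codes plus a small number of scales is what yields the roughly eightfold reduction in raw weight storage versus 16-bit precision.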
The model is designed for Arm-based computing platforms and runs efficiently on mobile devices equipped with Arm SME2 technology; it also shows substantial speed improvements on devices powered by Apple's M4 (as in the MacBook) and MediaTek's Dimensity 9500 chips. Tencent Hunyuan optimized performance through data refinement, elastic stretching quantization, and new training strategies, and plans to apply reinforcement learning and model distillation to further narrow the capability gap between low-bit and full-precision models.
