According to DeepSeek's official announcement, on December 1 the company released two new models: DeepSeek-V3.2 and its high-performance variant, DeepSeek-V3.2-Speciale. V3.2 is designed to balance reasoning ability against output length, making it well suited to everyday uses such as question answering and general agent tasks: its reasoning performance is comparable to GPT-5's, but it produces shorter outputs, reducing compute cost and user wait time. V3.2-Speciale, by contrast, pushes reasoning capability to its limit and incorporates the theorem-proving abilities of DeepSeek-Math-V2; it has achieved gold-medal results in several international competitions. Because of its higher cost, however, it is currently available only for research use.
