Weibo Unveils Its First Self-Developed Open-Source Large Model: VibeThinker
Author: Editorial Staff

On November 18, Weibo announced the official launch of its first self-developed open-source large model, VibeThinker. Despite being a lightweight model with only 1.5 billion parameters, it outperformed the 671-billion-parameter DeepSeek R1 on benchmarks based on premier international math competitions. Moreover, a single round of post-training for VibeThinker costs just $7,800, dozens of times less than the comparable costs of models such as DeepSeek-R1 and MiniMax-M1.

VibeThinker remains in an experimental phase, with research and development focused on strengthening small models' ability to solve complex mathematical problems and competitive programming tasks. It is not yet suited to everyday casual conversation; instead, its niche is high-intelligence application domains such as mathematics and coding.