Ant Open-Sources Two Core Robot Models to Propel Embodied AI Progress

Official announcement from AntTech: On January 28, 2026, Ant Lingbo Technology fully open-sourced its LingBot-VLA embodied large model and LingBot-Depth spatial perception model, along with the associated code. LingBot-VLA was pre-trained on more than 20,000 hours of real-world robot data and supports cross-platform, cross-task transfer; compared with mainstream frameworks, it achieves 1.5–2.8× higher training efficiency, which substantially reduces post-training costs. LingBot-Depth, in turn, focuses on environmental depth perception and three-dimensional spatial understanding, excelling in complex scenarios, particularly those involving transparent or reflective objects. The open-source release is intended to lower research and development barriers across the industry and accelerate progress in robotics.