On January 26, 2026, the National-Local Joint Engineering Research Center for Spatial Information Technology and Applications, in collaboration with Weitai Robotics, released 'Baihu-VTouch', the world's first cross-ontology visuotactile multimodal dataset. The dataset comprises more than 60,000 minutes of recordings and roughly 90.72 million paired contact samples collected from real-world objects, covering more than 260 tasks across four primary scenarios.
