Tsinghua AIR Team Unveils Key Distinctions in Visual Attention Between Humans and Autonomous Driving Algorithms
Author: Editorial Staff

On February 21, the Tsinghua University Institute for AI Industry Research (AIR) team published a study titled "Human and Algorithmic Visual Attention in Driving Tasks" in npj Artificial Intelligence (February 2026). Focusing on the safety-critical domain of autonomous driving, the research is the first to combine human eye-tracking experiments with algorithmic comparative validation in a dual-track methodology to investigate the fundamental differences in visual attention between humans and algorithms. The study introduces a three-stage quantitative framework for analyzing human driving attention and finds that the main limitation of algorithmic visual understanding is its inability to effectively extract "semantic saliency." By incorporating semantic attention mechanisms derived from human observation patterns, the study demonstrates a cost-effective way to narrow both the "semantic gap" in specialized algorithms and the "grounding gap" in large models, without the need for extensive pre-training.