In autonomous driving, the debate continues between the camera-only approach and multi-sensor fusion perception, which integrates lidar. Most automakers currently opt for multi-sensor fusion, combining data from lidar, millimeter-wave radar, ultrasonic sensors, and cameras to improve the precision and safety of environmental perception.
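One common way such systems combine redundant readings is inverse-variance weighting: a precise sensor (e.g., lidar) pulls the fused estimate toward its value more strongly than a noisier one (e.g., a camera range estimate). The sketch below is purely illustrative; the sensor names, values, and variances are hypothetical, not drawn from any production system.

```python
def fuse_measurements(measurements):
    """Fuse independent estimates of the same quantity, given as
    (value, variance) pairs, using inverse-variance weighting.
    Lower variance = more trusted = larger weight."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    fused = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    fused_var = 1.0 / total  # fused estimate is tighter than any single input
    return fused, fused_var

# Hypothetical range-to-vehicle readings (metres, variance):
lidar = (25.0, 0.05)   # precise
camera = (26.0, 1.0)   # noisier monocular depth estimate
fused, fused_var = fuse_measurements([lidar, camera])
```

Because the lidar variance is twenty times smaller here, the fused range lands very close to the lidar reading, while the fused variance drops below either input's.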
Tesla, however, has taken a different path, committing to a camera-only solution that relies exclusively on cameras and neural-network algorithms to achieve autonomous driving. Tesla argues that cameras offer distinct advantages in resolution, information content, and dynamic response, and that they are far cheaper than other sensor types. To achieve full 360-degree coverage, Tesla deploys multiple cameras around the vehicle; combined with its algorithms, these identify and track road targets in real time, supporting path planning and driving control.
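Tracking targets across camera frames requires associating each new detection with an existing track. A minimal sketch of one simple approach, greedy nearest-neighbour association, is shown below; this is a generic illustration of the idea, not Tesla's actual tracking algorithm, and the distance threshold is an arbitrary assumption.

```python
import math

def associate(tracks, detections, max_dist=2.0):
    """Greedily match detections to the nearest existing track.

    tracks:     {track_id: (x, y)} last known positions
    detections: [(x, y), ...] positions detected in the current frame
    Returns (assignments, unmatched): detection index -> track id,
    plus indices of detections that start new tracks.
    """
    assignments, unmatched, used = {}, [], set()
    for d_idx, det in enumerate(detections):
        best, best_dist = None, max_dist
        for t_id, pos in tracks.items():
            if t_id in used:
                continue
            dist = math.dist(pos, det)  # Euclidean distance
            if dist < best_dist:
                best, best_dist = t_id, dist
        if best is None:
            unmatched.append(d_idx)      # no track nearby: new object
        else:
            used.add(best)
            assignments[d_idx] = best
    return assignments, unmatched

# Hypothetical frame: two known tracks, three fresh detections
tracks = {1: (0.0, 0.0), 2: (5.0, 5.0)}
detections = [(0.5, 0.0), (5.2, 4.9), (10.0, 10.0)]
matched, new = associate(tracks, detections)
```

Production trackers typically replace greedy matching with globally optimal assignment (e.g., the Hungarian algorithm) and predict each track's motion before matching, but the data-association step itself has this shape.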
While multi-sensor fusion solutions are more robust in complex driving scenarios, Tesla maintains that the camera-only approach better matches actual driving requirements in information richness and frame-rate response. With continuous algorithm optimization, Tesla expects its camera-only system's performance to keep improving. Looking ahead, it is plausible that the two approaches, camera-only and multi-sensor fusion, will converge and evolve, each tailored to different application scenarios.
