According to NVIDIA's official announcement, the company significantly expanded its open-source AI ecosystem at the NeurIPS 2025 conference. It introduced DRIVE Alpamayo-R1, billed as the world's first open reasoning Vision-Language-Action (VLA) model built specifically for autonomous driving, alongside physical AI tools such as LidarGen and ProtoMotions3.
NVIDIA also rolled out MultiTalker Parakeet, a multi-speaker speech recognition model, released a public audio security dataset, and unveiled digital AI development kits such as NeMo Gym.
The models and datasets are now available on GitHub, Hugging Face, and NVIDIA Physical AI Open Datasets.
