On September 25, 2025, NVIDIA announced that it was open-sourcing Audio2Face, its generative AI facial animation model. The release includes the foundation model, a complete software development kit (SDK), and a training framework, with the goal of making it easier to integrate intelligent virtual characters into games and 3D applications. Given input audio, the model drives virtual characters to produce accurate lip-sync and natural emotional expressions in real time. It supports two modes: offline rendering for pre-recorded audio and real-time streaming for dynamic characters. It has already been applied in scenarios including game development, film and television production, and virtual customer service.
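The core idea of audio-driven facial animation is to convert frames of incoming audio into per-frame animation parameters (such as blendshape weights) that a character rig consumes. The toy sketch below illustrates only that general pipeline shape; it does not use NVIDIA's actual SDK or model, and the function names, frame sizes, and the energy-to-jaw mapping are all simplified, hypothetical stand-ins.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def frames(samples, frame_len):
    """Split a sample list into fixed-size frames (drop any tail remainder)."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def jaw_open_curve(samples, frame_len=160, gain=2.0):
    """Hypothetical mapping: per-frame audio energy -> a 0..1 'jaw open'
    blendshape weight. A real model predicts many such weights per frame."""
    return [min(1.0, gain * rms(f)) for f in frames(samples, frame_len)]

# Synthetic 'speech': a 100 Hz tone with a rising amplitude envelope,
# sampled at 16 kHz for 0.1 s (1600 samples -> 10 frames of 10 ms each).
sr = 16000
samples = [(i / 1600) * math.sin(2 * math.pi * 100 * i / sr)
           for i in range(1600)]
curve = jaw_open_curve(samples)
print(len(curve))            # 10 frames of animation parameters
print(curve[0] < curve[-1])  # louder audio opens the jaw wider
```

The same per-frame loop covers both modes mentioned above: in offline rendering the whole sample buffer is processed at once, while in streaming mode each newly arrived frame is mapped to animation weights as soon as it is received.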