NVIDIA Declares: It Hasn't Turned Its Back on 64-bit Computing
2025-12-14
Author: Staff Editor

In recent years, the rapid rise of artificial intelligence (AI) has reshaped the trajectory of chip development. NVIDIA has put AI performance front and center, driving a steady reduction in computational precision, from FP64 and FP32 down to FP16, FP8, and even FP4. The Blackwell architecture is a case in point: its much-touted NVFP4 format, while keeping accuracy loss minimal, reportedly boosts GB300 performance by 50%, cuts memory usage by a factor of two to three, and improves energy efficiency by as much as 50 times.

This strategic pivot, however, has raised concerns in the scientific computing community. FP64 throughput has fallen sharply, from 34 TFLOPS on the H100 to a mere 1.2 TFLOPS on the B300, a decline that creates serious obstacles for workloads that demand high-precision arithmetic, such as climate modeling and computational fluid dynamics. Although NVIDIA has improved FP64 simulation performance by 1.8 times through its cuBLAS math library and has pledged to strengthen native FP64 capability in future GPUs, the academic community remains wary of the continuing slide in precision. This divergence underscores a deeper tension between the surging demand for AI compute and traditional scientific computing's careful balance of precision and efficiency.
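Why does reduced precision worry scientific-computing users? Lower-precision formats have far coarser spacing between representable numbers, so small perturbations, which climate or fluid simulations accumulate over millions of steps, can be rounded away entirely. The sketch below illustrates the effect by emulating FP16 storage with Python's standard library (the `struct` "e" half-precision format); it is a generic numerical illustration, not tied to any NVIDIA hardware or to the NVFP4 format itself:

```python
import math
import struct

def to_fp16(x: float) -> float:
    # Round a Python float (IEEE 754 binary64) to half precision (binary16)
    # and back, emulating what survives storage in an FP16 value.
    return struct.unpack("e", struct.pack("e", x))[0]

# In FP64, a small perturbation of 1.0 is faithfully represented:
fp64_sum = 1.0 + 1e-4

# In FP16, representable values around 1.0 are spaced 2**-10 apart
# (about 9.8e-4), so the same perturbation rounds away to nothing:
fp16_sum = to_fp16(to_fp16(1.0) + to_fp16(1e-4))

print(fp64_sum)       # 1.0001
print(fp16_sum)       # 1.0 -- the increment was lost
print(math.ulp(1.0))  # FP64 spacing at 1.0: ~2.22e-16
```

In FP64 the spacing at 1.0 is about 2.2e-16, roughly twelve orders of magnitude finer than FP16's, which is why iterative high-precision simulations resist being pushed down the precision ladder even when AI workloads tolerate it.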