AI commentator @iruletheworldmo claims that, following the successful debut of its R1 model, DeepSeek is preparing to release an R2 model with a substantial performance upgrade. According to the post, the new model leverages a cluster of Huawei's Ascend 910B chips, the forthcoming Atlas 900 system, and DeepSeek's proprietary distributed training framework, reportedly achieving an accelerator utilization rate of 82% and delivering 512 petaFLOPS of FP16 compute, roughly half an exaFLOP.
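As a sanity check on those figures, the arithmetic behind "512 petaFLOPS ≈ half an exaFLOP" can be sketched as follows. Note that the per-chip peak used below is an illustrative assumption, not a number from the report; only the 82% utilization and 512 petaFLOPS figures come from the source.

```python
# Back-of-the-envelope check of the claimed cluster throughput.
PEAK_FP16_PER_CHIP = 320e12   # assumed FP16 peak per Ascend 910-class chip (FLOPS); illustrative only
UTILIZATION = 0.82            # utilization rate claimed in the report
DELIVERED = 512e15            # delivered FP16 throughput claimed in the report (512 petaFLOPS)

# Delivered throughput expressed in exaFLOPS (1 exaFLOP = 1e18 FLOPS).
delivered_exa = DELIVERED / 1e18
print(f"Delivered: {delivered_exa:.3f} exaFLOPS")  # 0.512, i.e. about half an exaFLOP

# Implied cluster size under the assumed per-chip peak.
implied_chips = DELIVERED / (UTILIZATION * PEAK_FP16_PER_CHIP)
print(f"Implied cluster size: ~{implied_chips:.0f} chips")
```

Under these assumptions the delivered figure works out to roughly two thousand accelerators; a different per-chip peak would scale that estimate proportionally.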
