On December 2 (local time), at its re:Invent 2025 conference, AWS unveiled Trainium3, its third-generation in-house AI training chip. Built on a 3-nanometer manufacturing process, the chip delivers four times the performance, 40% better energy efficiency, and four times the memory bandwidth of its predecessor, which AWS says can cut the cost of AI model training and inference by up to 50%. The accompanying UltraServer system integrates 144 chips per unit, delivering 362 petaFLOPS (PFLOPS) of compute and supporting the construction of clusters comprising millions of chips. AWS also previewed its next-generation chip, Trainium4, which will incorporate NVIDIA NVLink Fusion technology.
