As reported by SiliconANGLE, Google has rolled out its custom Ironwood tensor processing unit (TPU) for cloud customers. The chip scales to 9,216 units per pod, with 9.6 Tbps of inter-chip interconnect bandwidth and 1.77 PB of shared high-bandwidth memory, making it Google's most powerful AI accelerator to date. Ironwood delivers a major jump in FP8 compute, offering 10 times the performance of the TPU v5p and 4 times that of the TPU v6e. It is already being used to run models such as Gemini and Claude, demonstrating its readiness for production workloads.
