Meta is pursuing a multi-supplier strategy for AI computing power. It has committed to purchasing millions of NVIDIA GPUs, and at the same time has signed a multi-billion-dollar agreement with Google to lease TPUs over the coming years to support the development of new AI models. The two companies are also in talks over Meta purchasing TPUs outright for its own data centers, potentially as soon as next year. Meta plans to use the TPUs for AI training, while it has separately struck a major deal with AMD to run existing models on AMD chips and continues to develop its own in-house inference chips.
Meta's shift follows setbacks in developing its in-house training chips and earlier difficulties deploying NVIDIA hardware. Google, for its part, is stepping up its challenge to NVIDIA: it hopes to grow revenue through TPU sales and is using a range of financing arrangements to push TPUs to external customers. Even so, TPU adoption faces several obstacles. Google must balance competing priorities while contending with supply constraints, since TPUs and NVIDIA GPUs compete for the same TSMC manufacturing capacity.
