On September 12, Guotai Junan Securities issued a research report noting that NVIDIA's next-generation Rubin CPX decouples AI inference workloads at the hardware level, separating the compute-intensive context (prefill) phase from the bandwidth-intensive generation (decode) phase. The advance is reinforced by memory upgrades that speed up data transfer.
As computing speeds rise, the average per-device capacity of DRAM and NAND Flash in AI-driven applications (smartphones, servers, and laptops) has climbed steadily, with servers posting the strongest growth: average Server DRAM capacity per device is projected to rise 17.3% year-on-year in 2024.
Driven by sustained demand for AI servers, together with the staggered launch and mass production of NVIDIA's next-generation Rubin chips and the self-developed ASIC chips of cloud service providers (CSPs), DRAM products built for high-speed computing are poised for gains in both volume and price. The memory module market therefore warrants close attention.