CITIC Securities: Super Nodes Enhance AI Training and Inference Efficiency, with Three Key Growth Areas to Watch

A research report from CITIC Securities notes that super nodes tightly couple multiple accelerator cards through high-bandwidth, low-latency scale-up networks and significantly improve AI training and inference efficiency via mechanisms such as memory pooling and direct memory access, making them a clear industry trend. As NVIDIA and other vendors push super nodes toward higher efficiency and larger capacity, growth opportunities will emerge in GPU-to-GPU switch chips, liquid cooling, and in-cabinet power supplies, which benefit respectively from purely incremental scale-up demand, higher penetration driven by high-power cabinets, and rising ASPs. The report projects that by 2028 the incremental markets in these three areas will reach $100 billion, $13 billion, and $24 billion, respectively.

Among these, domestic substitution of switch chips holds broad prospects: the segment's favorable commercial attributes support a stable oligopoly, and as the incremental space for GPU-to-GPU interconnects expands, the domestic switch chip market is expected to reach $5 billion by 2028. Ethernet is emerging as the primary technology direction for switch chips, and supporting domestic solutions are already in place. Leading domestic Ethernet switch chip companies currently generate revenue of around RMB 1 billion, indicating significant growth potential and warranting attention. In addition, domestic substitution of switch-chip-based CPO/NPO optical interconnects is also worth monitoring.