Next-Gen HBM Memory Set to Incorporate Embedded GPU Cores
2025-11-26
Author: Editor

Technology firms are spearheading a design revolution in High-Bandwidth Memory (HBM): plans to integrate GPU cores directly into next-generation memory stacks. Meta and NVIDIA are both evaluating a "custom HBM" architecture that would embed GPU cores in the base die of HBM devices, and SK Hynix and Samsung have already joined preliminary discussions.

The main goal of this shift is to substantially boost memory bandwidth and cut latency. By placing GPU cores alongside HBM, the architecture can better serve the ever-growing demands of artificial intelligence (AI) and high-performance computing.

HBM is a core building block of AI computing power, so its technological evolution matters greatly for data centers and AI applications. SK Hynix, Samsung, and Micron are all stepping up their HBM4 development efforts and exploring customized approaches to meet the varied requirements of AI chip makers.

The HBM technology roadmap covers several key areas: bandwidth improvement, capacity expansion, and architectural innovation. A clear trend toward customization is expected to further expand the HBM market, addressing the urgent need for high-bandwidth, low-latency memory in AI and high-performance computing.