In the memory industry, the three leading manufacturers, SK Hynix, Samsung, and Micron, are racing to develop 16-layer stacked high-bandwidth memory (16-Hi HBM) chips. All three plan to begin supplying NVIDIA in the fourth quarter of 2026, targeting the company's high-end AI accelerators. 16-Hi HBM has not yet reached commercialization, and its development faces a host of technical hurdles: the growing complexity of DRAM stacking, the need to thin wafers further, intense competition over bonding processes, and heat dissipation.
Under JEDEC standards, the overall package thickness of HBM4 is capped at 775µm. For 16-layer stacking, however, wafer thickness must be reduced from 50µm to roughly 30µm, which sharply increases processing difficulty, and the bonding material must be thinned to under 10µm. Once the dies are this thin, dissipating heat effectively becomes a major technical obstacle that all three companies must solve.
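A rough thickness budget shows why the thinning is unavoidable. The sketch below uses the article's figures (775µm cap, ~50µm vs ~30µm dies, ~10µm bonds); the base-die thickness and the simplification of counting one bond layer per DRAM die are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope thickness budget for a 16-Hi HBM4 stack.
# The 775 µm cap, die thicknesses, and bond thickness come from the article;
# BASE_DIE_UM and the one-bond-per-die simplification are assumptions.

PACKAGE_CAP_UM = 775   # JEDEC HBM4 package-height limit
LAYERS = 16            # DRAM dies in a 16-Hi stack
BASE_DIE_UM = 60       # assumed base/logic die thickness (hypothetical)

def stack_height(die_um, bond_um, layers=LAYERS, base_um=BASE_DIE_UM):
    """Approximate stack height: base die + DRAM dies + bonding layers."""
    return base_um + layers * die_um + layers * bond_um

# At today's ~50 µm die thickness, 16 layers overshoot the cap:
#   60 + 16*50 + 16*10 = 1020 µm, well over 775 µm.
over = stack_height(die_um=50, bond_um=10)

# Thinning dies to ~30 µm and bonds below 10 µm fits the budget:
#   60 + 16*30 + 16*9 = 684 µm, under 775 µm.
under = stack_height(die_um=30, bond_um=9)

print(over, under)
```

The exact margin depends on the base die and any lid or underfill, but the comparison makes the point: 16 layers of 50µm dies cannot fit in 775µm, so thinner wafers and bonds are a hard requirement, not an optimization.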
