SK Hynix Cranks Up The HBM4 Assembly Line To Prep For Next-gen GPUs
AMD and Nvidia have already announced their next-gen datacenter GPUs will make the leap to HBM4, and if SK Hynix has its way, it’ll be the one supplying the bulk of it.
On Friday, the South Korean memory giant announced that it had wrapped up HBM4 development and was preparing to begin producing the chips in high volumes. The news sent SK Hynix's share price up roughly 7 percent, and for good reason.
High Bandwidth Memory (HBM) has become an essential component in high-end AI accelerators from the likes of Nvidia, AMD, and others. Both Nvidia’s Rubin and AMD’s Instinct MI400 families of GPUs, pre-announced earlier this year, rely on memory vendors having a ready supply of HBM4 in time for their debut in 2026.
The transition comes as the GPU slingers run up against the limits of existing HBM technologies, which currently top out at around 36 GB of capacity and about 1 TB/s of bandwidth per module, giving chips like Nvidia’s B300 or AMD’s MI355X about 8 TB/s of aggregate memory bandwidth.
With the move to HBM4, we’ll see bandwidth jump considerably. At GTC in March, Nvidia revealed its Rubin GPUs would pack 288 GB of HBM4 and achieve 13 TB/s of aggregate bandwidth. AMD aims to cram even greater quantities of memory onto its upcoming MI400-series GPUs, which will power its first rack-scale system called Helios.
As we learned at AMD’s Advancing AI event in June, the parts will pack up to 432 GB of HBM with an aggregate bandwidth approaching 20 TB/s.
SK Hynix says that it has effectively doubled the bandwidth of its HBM by increasing the number of I/O terminals to 2,048, twice what we saw on HBM3e. This, it argues, has also boosted energy efficiency by more than 40 percent.
While DRAM isn't usually a major power draw in most servers, HBM is another story. The jump from the 24 GB modules on AMD's MI300X to the 36 GB modules found on the MI325X saw per-GPU power consumption balloon from 750 W to roughly a kilowatt.
SK Hynix says that, along with more I/O terminals and improved efficiency, its chips have also managed to exceed the JEDEC standard for HBM4, achieving a 10 Gb/s per-pin operating speed.
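Those two figures, a 2,048-bit interface and 10 Gb/s per pin, are all you need for a back-of-envelope bandwidth estimate. A minimal sketch, noting that the eight-stacks-per-GPU figure is our assumption for illustration and not something SK Hynix has confirmed:

```python
def hbm_stack_bandwidth_tbs(io_pins: int, pin_speed_gbps: float) -> float:
    """Per-stack bandwidth in TB/s: pin count x per-pin rate (Gb/s),
    divided by 8 to convert bits to bytes, then by 1,000 for TB/s."""
    return io_pins * pin_speed_gbps / 8 / 1000

# SK Hynix's HBM4: 2,048 I/O terminals running at 10 Gb/s per pin
per_stack = hbm_stack_bandwidth_tbs(2048, 10.0)
print(f"Per stack: {per_stack:.2f} TB/s")      # 2.56 TB/s

# Assuming (hypothetically) eight stacks per GPU package:
aggregate = 8 * per_stack
print(f"Aggregate: {aggregate:.2f} TB/s")      # ~20.5 TB/s
```

That ~20 TB/s figure for an eight-stack package lines up with the aggregate bandwidth AMD is quoting for the MI400 series.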
Which of the three big HBM vendors will end up supplying these chips remains to be seen. While SK Hynix has won the majority of Nvidia’s HBM business over the past few years, Samsung and Micron are also working to bring HBM4 to the market.
Micron began sampling 36 GB 12-high HBM4 stacks to customers in June. Much like SK Hynix's, its stacks use a 2,048-bit interface and will achieve roughly twice the bandwidth of the HBM3e modules available today. The American memory vendor expects to ramp production of the stacks sometime next year.
Meanwhile, for Samsung, HBM4 presents a new opportunity to win over Nvidia’s business. The vendor has reportedly struggled to get its HBM3e stacks validated for use in the GPU giant’s Blackwell accelerators. ®