Samsung Predicts Profit Slump As Its HBM3e Apparently Continues To Underwhelm Nvidia

Analysis During the AI gold rush, the next best thing to selling the shovels – that is, the GPUs – is manufacturing the silicon that makes them possible. But while TSMC and SK Hynix continue to cash in on Nvidia's successes, Samsung hasn't been nearly so fortunate.

The Korean giant is the world's largest supplier of memory modules, but has so far struggled to secure Nvidia's sign-off to use its latest-generation high bandwidth memory (HBM3e) in the GPU giant's highest-end AI accelerators and Blackwell GPUs.

The company today posted guidance for its second-quarter results, predicting revenue of approximately ₩74 trillion ($53.8 billion) and operating profit of ₩4.6 trillion ($3.3 billion).

That profit is considerably worse than the ₩6.3 trillion prediction financial analysts previously shared with Reuters. It will be Samsung's lowest profit since Q4 2023's ₩2.43 trillion ($1.75 billion) and a nasty dip from the ₩6.69 trillion ($4.9 billion) it posted in Q1 2025.

Memory is a commodity market, so buyers seldom deal with a single vendor. With that in mind, Samsung has been working to qualify its "enhanced" HBM3e stacks for use in Nvidia's flagship chips. These modules have been in customer hands for a while, Samsung CFO Soon-Cheol Park told analysts during the company's Q1 earnings call in late April.

"Clients have been deferring demand ahead of the planned launch of our enhanced HBM products," he said at the time.

While Samsung may be sending samples to customers, that doesn't mean its products meet expectations. In fact, Korean press, citing securities analysts, recently reported that Samsung failed for a third time in June to qualify the silicon for use in Nvidia products, with its next shot set for September.

Samsung's challenges bringing HBM3e to market have been a boon for rival SK Hynix, which has become Nvidia's primary supplier of the memory. Micron has also managed to seize the moment with its 12-high HBM3e stacks slated for use in Nvidia's upcoming Blackwell Ultra B300 and GB300 accelerators.

We don't know exactly why Samsung's memory isn't qualifying, but it's reportedly related to heat and power consumption.

HBM manufacturers assemble memory modules by stacking DRAM dies vertically and connecting them with through-silicon vias, an approach that allows for a much wider memory interface – and therefore much higher bandwidth – than conventional DIMMs.

In the case of the HBM3e found in Nvidia's Blackwell GPUs, each of the eight Chiclet-sized modules provides between 24GB and 36GB of capacity and up to 1TB/s of bandwidth. By comparison, a stick of MRDIMM 8800 – currently the fastest non-HBM server memory available – offers substantially higher capacity but manages just over 70GB/s of bandwidth.
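
The arithmetic behind those figures is straightforward. The Python sketch below is a back-of-the-envelope comparison, not a spec: the 1,024-bit HBM stack interface and the 64-bit DDR5 data bus are standard figures, but the ~8Gb/s per-pin HBM3e rate is an assumption that varies by vendor and speed grade.

```python
# Back-of-the-envelope peak-bandwidth comparison (theoretical maxima only).
# Assumed figures: HBM3e per-pin rate of ~8 Gb/s and an MRDIMM transfer
# rate of 8,800 MT/s; shipping parts vary by vendor and bin.

def hbm3e_stack_bandwidth_gbs(pin_rate_gbit: float = 8.0,
                              bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of a single HBM3e stack in GB/s."""
    return pin_rate_gbit * bus_width_bits / 8  # bits -> bytes

def mrdimm_bandwidth_gbs(transfer_rate_mt: float = 8800,
                         bus_width_bits: int = 64) -> float:
    """Peak bandwidth of one MRDIMM in GB/s (64-bit DDR5 data bus)."""
    return transfer_rate_mt * bus_width_bits / 8 / 1000  # MB/s -> GB/s

per_stack = hbm3e_stack_bandwidth_gbs()  # ~1,024 GB/s, i.e. roughly 1 TB/s
all_stacks = per_stack * 8               # eight stacks per Blackwell GPU
dimm = mrdimm_bandwidth_gbs()            # ~70.4 GB/s

print(f"HBM3e, per stack: {per_stack:,.0f} GB/s")
print(f"HBM3e, 8 stacks:  {all_stacks:,.0f} GB/s")
print(f"MRDIMM 8800:      {dimm:.1f} GB/s")
```

In other words, the gap comes almost entirely from interface width: a single HBM3e stack moves data over sixteen times as many wires as a DIMM does.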

As you might expect, HBM comes with some tradeoffs. Cramming all that bandwidth into such a small package means these chips require more power and often run hot. Further complicating things, these modules typically rely on advanced packaging techniques, like TSMC's CoWoS, to integrate into a GPU.

Samsung's apparent difficulty meeting Nvidia's expectations is good news for AMD – with HBM in such high demand, it seems the House of Zen is willing to take what its larger rival will not.

According to multiple reports, AMD will source at least some of the 12-high HBM3e memory used in its recently announced MI350-series GPUs from Samsung. We've asked AMD to confirm those reports but had not heard back at the time of publication.

While Samsung may be able to sell some of its HBM3e to AMD, the chip designer's Instinct accelerators have paltry market share compared to Nvidia's. For reference, Nvidia's GPU sales drove its datacenter revenues to $115.2 billion last year, while AMD's Instinct accelerators contributed a little over $5 billion of the company's $12.6 billion in datacenter sales.

Further complicating things for Samsung, the industry is already moving on from HBM3e. Both AMD's MI400-series and Nvidia's Rubin GPUs will make the move to HBM4 next year.

Samsung expects to begin mass production of HBM4 modules in the second half of this year, and believes revenue from the products will start to flow in 2026. ®
