Report: Samsung to Start Mass Production of HBM4 Ahead of Schedule

Samsung is expected to start mass production of HBM4 stacks as early as February. Nvidia, which needs the memory for its AI systems, is named as the main customer.

Samsung's chip stack, shown here as HBM3e.

(Image: Samsung Semiconductor)


According to a South Korean report, Samsung is set to begin mass production of new High Bandwidth Memory (HBM4) as early as next week. These are considered crucial components for AI accelerators announced for 2026, such as Nvidia's Vera Rubin and AMD's new Instinct systems.

South Korean news agency Yonhap News reports, citing sources close to Samsung, that the HBM4 stacks have passed Nvidia's tests. The GPU market leader has reportedly already placed orders. The report does not name competitor AMD, which is also relying on HBM4 for its new accelerators, as a potential Samsung customer.

Previously, it was assumed that Samsung would not be able to deliver the coveted high-bandwidth chip stacks until the second half of 2026. SK Hynix has been the first and so far only company to mass-produce HBM4, doing so since autumn 2025. The manufacturer, also from South Korea and a global market leader alongside Samsung, was therefore considered Nvidia's closest partner. As Yonhap previously reported, SK Hynix representatives are said to have recently met with Nvidia CEO Jensen Huang in the US to discuss further cooperation.

Compared to HBM3e, which has been predominantly used for AI GPUs so far, HBM4 offers significantly increased throughput and higher memory density. The JEDEC specification stipulates 8 Gbit/s per pin across an interface widened from 1,024 to 2,048 bits, but SK Hynix announced six months ago that it could achieve “over 10 Gbit/s.” For an entire chip stack, this would mean at least a doubling of bandwidth compared to HBM3e, to well over 2048 GByte/s. The current reports do not specify how fast Samsung's HBM4 stacks will operate.
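The arithmetic behind these figures can be sketched as follows. The 2,048-bit HBM4 interface width and the 8 Gbit/s baseline are per the JEDEC specification; the 9.6 Gbit/s HBM3e rate is one typical shipping speed chosen here for comparison, and Samsung's actual per-pin figures are not public:

```python
def stack_bandwidth_gbyte_s(interface_bits: int, gbit_per_pin: float) -> float:
    """Peak bandwidth of one HBM stack in GByte/s:
    interface width (bits) x per-pin rate (Gbit/s), divided by 8 bits/byte."""
    return interface_bits * gbit_per_pin / 8

# HBM4 at the JEDEC baseline: 2048-bit interface, 8 Gbit/s per pin.
print(stack_bandwidth_gbyte_s(2048, 8.0))    # 2048.0 GByte/s
# At SK Hynix's claimed "over 10 Gbit/s" per pin:
print(stack_bandwidth_gbyte_s(2048, 10.0))   # 2560.0 GByte/s
# HBM3e for comparison: 1024-bit interface at e.g. 9.6 Gbit/s per pin.
print(stack_bandwidth_gbyte_s(1024, 9.6))    # 1228.8 GByte/s
```

This shows where the "well over 2048 GByte/s" and the "at least a doubling" over HBM3e come from: the interface width doubles, and the per-pin rate stays the same or rises.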


The increasing focus of memory manufacturers on HBM and GDDR components is considered a significant factor for the enormous price increases for PC memory modules in recent months. Because hyperscalers like Alphabet (Google), Amazon, Meta, Microsoft, and OpenAI are willing to pay almost any price for their data centers, the production of AI-specific memory is more lucrative for chip manufacturers than that of PC components. Additionally, servers consume a considerable amount of classic RAM. Samsung's stock rose by a good five percent in South Korea following the report. The Global Depositary Receipt (GDR) for trading Samsung shares in the West does not reflect this development.

(nie)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.