HBM hurricane: a computing power breakthrough

Mondo Technology Updated on 2024-02-01

Written by | Tian Xiaomeng

Edited by | Lee Shin Ma

Title image | IC photo

Thanks to the demand for AI computing power, HBM has become a hot commodity.

Recently, according to South Korea's Chosun Biz, NVIDIA has delivered advance payments of 700 billion to 1 trillion won to SK hynix and Micron. Although the purpose of the advance payments was not specified, it is generally believed that NVIDIA is paying to lock in HBM3E supply for the GPU products it plans to launch this year.

Amid this surge in demand, the three storage giants are racing to expand HBM production. Samsung and SK hynix reportedly plan to increase HBM output by about 2.5 times, while Micron will launch 8-layer stacked AI-specific HBM3E memory this year. For HBM4, the three companies have also laid out roadmaps, successively confirming development and launch timelines.

TrendForce estimates that global HBM bit supply will grow by 105% in 2024. The HBM market is expected to reach US$8.9 billion in 2024, a year-on-year increase of 127%, and to reach US$12.74 billion by 2026, corresponding to a CAGR of about 37%.

1. Who is driving the HBM market?

HBM (High Bandwidth Memory) is a high-performance DRAM built on a 3D stacking process: multiple DRAM dies are stacked vertically and packaged together with the GPU. Near-memory computing based on interposer interconnection, combined with TSV (through-silicon via) stacking, is what lets HBM break through the memory bandwidth and power consumption bottlenecks and become the first choice for AI training hardware.

In fact, HBM has been in use for several years. Since the first through-silicon-via HBM product came out in 2014, the technology has advanced through five generations (HBM, HBM2, HBM2E, HBM3, HBM3E), with HBM3E being an extended version of HBM3; by the same count, HBM4 will be the sixth generation. Across this evolution, the latest HBM3 has multiplied its bandwidth, stack height, capacity, and I/O rate many times over compared with the first generation.

Source: rambus
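To make the generational gains concrete, here is a minimal back-of-envelope sketch in Python: per-stack bandwidth is just the interface width times the per-pin data rate. The 1024-bit width is the standard HBM stack interface; the per-pin rates are nominal, publicly quoted figures (not from this article) and vary by vendor and part:

```python
# Back-of-envelope HBM bandwidth: interface width (bits) x per-pin rate (Gb/s).
# Per-pin rates below are nominal public figures; actual products vary.

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Theoretical per-stack bandwidth in GB/s (divide by 8: bits -> bytes)."""
    return bus_width_bits * pin_rate_gbps / 8

generations = {
    "HBM1":  (1024, 1.0),   # ~128 GB/s per stack
    "HBM2":  (1024, 2.4),   # ~307 GB/s (the spec started at 2.0 Gb/s)
    "HBM2E": (1024, 3.6),   # ~460 GB/s
    "HBM3":  (1024, 6.4),   # ~819 GB/s
    "HBM3E": (1024, 9.6),   # ~1,229 GB/s (announced parts span ~8-9.8 Gb/s)
}

for name, (width, rate) in generations.items():
    print(f"{name:6s}: {stack_bandwidth_gbs(width, rate):6.0f} GB/s per stack")
```

Running this reproduces the roughly order-of-magnitude jump from HBM1 to HBM3E per stack; an accelerator then multiplies this by the number of stacks it packages.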

This is exactly what AI computing power needs.

According to OpenAI, the amount of compute used in the largest AI training runs has grown roughly 10x per year since 2012. The progression of OpenAI's own models fully reflects this growth: ChatGPT, released in November 2022, was built on GPT-3's 175 billion parameters, and GPT-4, released in March last year, is reported to use well over a trillion parameters.
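As a rough illustration of why parameter counts translate directly into memory pressure, here is a hedged sketch using GPT-3's published 175-billion-parameter count at FP16 serving precision against an assumed nominal 24 GB HBM3 stack (illustrative figures, not from the article):

```python
# Why parameter growth stresses memory: the weights alone, at serving
# precision, already span many HBM stacks.
# 175e9 is GPT-3's published parameter count; FP16 (2 bytes/param) is a
# common serving precision; 24 GB per HBM3 stack is a nominal figure.

PARAMS = 175e9               # GPT-3-scale model
BYTES_PER_PARAM = 2          # FP16
HBM3_STACK_GB = 24           # e.g., a 12-high HBM3 stack (assumed)

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
stacks_needed = weights_gb / HBM3_STACK_GB
print(f"Weights alone: {weights_gb:.0f} GB "
      f"(~{stacks_needed:.0f} HBM3 stacks, before activations or KV cache)")
```

That is about 350 GB for the weights alone, before activations, KV cache, or optimizer state, which is why capacity and bandwidth per package have become the battleground.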

In response to this AI wave, NVIDIA founder and CEO Jensen Huang said, "Large language model startups, consumer Internet companies, and global cloud service providers have all taken the lead, and the next wave is poised to kick in." Computing power is, of course, a key ingredient of artificial intelligence, and AI chip makers play a central role.

NVIDIA's 2023 high-end AI lineup uses HBM throughout, including models like the A100/A800 and H100/H800. NVIDIA has further refined its portfolio with the H200 GPU carrying HBM3E memory, as well as the GH200 superchip, expected to be available in the second quarter of this year. Taking the GH200 as an example, its HBM3E memory is 50% faster than current HBM3 and provides a total of 10 TB/s of bandwidth, which NVIDIA says lets the GH200 Grace Hopper platform run models 3.5x larger than the previous version with 3x higher memory bandwidth for improved performance.
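Those bandwidth figures matter because autoregressive generation must stream roughly the full weight set from memory for every output token, so memory bandwidth caps token throughput. A minimal sketch of that ceiling, reusing the 10 TB/s aggregate figure quoted above and the assumed 350 GB FP16 footprint from the earlier sketch (batch size 1, ignoring caches and overlap):

```python
# Bandwidth ceiling on token generation: each decoded token must read
# (roughly) all weights once, so tokens/s <= bandwidth / model bytes.
# 10 TB/s is the aggregate GH200 figure quoted above; 350 GB reuses the
# assumed FP16 footprint of a 175B-parameter model.

BANDWIDTH_TB_S = 10.0        # TB/s, platform aggregate
MODEL_SIZE_GB = 350.0        # GB of FP16 weights

ceiling = BANDWIDTH_TB_S * 1000 / MODEL_SIZE_GB   # tokens per second
print(f"Upper bound: ~{ceiling:.0f} tokens/s per sequence "
      f"(real systems land well below this ceiling)")
```

The roughly 29 tokens/s ceiling is an idealized upper bound, but it shows why each HBM generation's bandwidth gain translates almost directly into inference speed.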

AMD, NVIDIA's chief rival in this contest, officially released its flagship AI GPU accelerator, the AMD Instinct MI300X, at the "Advancing AI" event in December last year, claiming performance up to 60% higher than NVIDIA's H100. The MI300X uses HBM3 memory with a capacity of up to 192 GB, 50% more than the previous-generation MI250X (128 GB), delivering up to 5.3 TB/s of memory bandwidth and 896 GB/s of Infinity Fabric bandwidth.

Intel also wants a piece of the AI pie. According to foreign media reports, Intel plans to release Gaudi 3 in 2024, equipped with up to 128 GB of HBM3E, which should greatly improve AI training performance.

Who can emerge as a real alternative to NVIDIA is a question only the market can answer.

There is no doubt that this AI competition demands enormous computing power, but with production capacity limited, demand exceeds supply; that, in turn, has pushed HBM prices up and driven rapid growth of the HBM market.

Source: Founder Securities.

The Founder Securities report pointed out that, on the cost side, the average selling price of HBM is at least three times that of conventional DRAM. Earlier, driven by ChatGPT demand and constrained capacity, HBM prices rose all the way, with HBM3 reaching as much as five times the price of the highest-performance DRAM.

2. A three-way split of the HBM market

While NVIDIA, AMD, and Intel are fighting each other, storage manufacturers are not idle, and production expansion has become the main theme.

SK hynix has led the HBM industry from announcing the successful development of HBM1 in 2013, to developing the world's fastest high-bandwidth memory, HBM2E, in 2019, to the world's first HBM3 DRAM in October 2021, and the world's highest-spec HBM3E last year.

It is worth noting that SK hynix's DRAM business, which fell into the red in the first quarter of 2023, returned to profit after only two quarters.

SK hynix explained, "The company's performance continued to improve after the first-quarter low thanks to rising market demand for high-performance memory, especially strong sales of core products such as HBM3, high-capacity DDR5 DRAM, and high-performance mobile DRAM, the representative memories for AI; operating revenue rose 24% and the operating loss narrowed 38% from the previous quarter." SK hynix said it will focus on converting production lines to fourth-generation 10nm-class (1a) and fifth-generation 10nm-class (1b) DRAM, while expanding investment in HBM and TSV technology.

Source: SK hynix's financial report.

It is reported that SK hynix has decided to set aside about 10 trillion won (about 7.6 billion US dollars) in 2024 for facility capital expenditure, focusing on expanding facilities for high value-added DRAM chips, including HBM3, DDR5 and LPDDR5, as well as upgrading HBM's TSV advanced packaging technology. Jeong-Ho Park, CEO of SK hynix, revealed that the company expects to ship 100 million HBMs per year by 2030.

In addition, as part of its annual executive reshuffle, SK hynix has established an AI Infrastructure division that will consolidate the high-bandwidth memory (HBM) capabilities scattered across the company. The division will also lead the development of next-generation HBM chips and other AI technologies, and identify and develop new markets. It is not hard to see that SK hynix has high expectations for HBM.

According to TrendForce statistics, SK hynix held 50% of the global HBM market in 2022, followed by Samsung with 40%.

Both have benefited from the recovery of the memory market, and Samsung Electronics' financial report for the third quarter of last year showed signs of it. According to the data, revenue from Samsung's memory business rose 17% quarter-on-quarter to 10.53 trillion won, while the DS division that houses the memory business posted an operating loss of 3.75 trillion won, narrowing from a loss of 4.36 trillion won the previous quarter.

Source: Samsung's financial report.

In anticipation of future growth in the memory market, Samsung Electronics acquired some buildings and equipment at Samsung Display's Cheonan plant for HBM production. Samsung plans to build a new packaging line at the Cheonan plant for mass production of HBM, with an estimated investment of 700 billion to 1 trillion won. At the same time, Samsung is mass-producing 8-layer and 12-layer HBM3 and has begun supplying 8-layer HBM3E samples.

It is reported that NVIDIA's HBM had been exclusively supplied by SK hynix, but with demand so enormous, Samsung is said to have passed quality assessment and will soon join the supplier queue. Samsung Electronics reportedly planned to begin supplying HBM3 to NVIDIA from January this year.

For 2023, TrendForce estimates that SK hynix and Samsung each hold about 46-49% of the HBM market. The figures underline how evenly matched the two are.

In this three-way split of the HBM market, the last player is Micron. Although Micron held only about 10% of the market in 2022, it is not far behind: its Taichung Fab 4 is dedicated to HBM production, and it plans to mass-produce fifth-generation HBM3E products this year.

Industry chain watchers expect the three companies to compete fiercely for the HBM3E supply required by NVIDIA's upcoming H200 and B100.

Summary

Intelligent applications such as smart voice assistants, ChatGPT, and intelligent driving have gradually penetrated our lives, creating huge demand for memory chips. In the second half of last year, the first signs of recovery in the memory chip market let industry insiders see the "dawn".

From the perspective of the memory industry chain, SK hynix, Samsung, and other major manufacturers use the IDM model, single-handedly covering chip design, manufacturing, packaging, and testing. Domestic Chinese players remain relatively marginal in this chain, concentrated mainly in HBM's upstream equipment and materials, though domestic HBM-related stocks have also benefited. Related manufacturers are reportedly laying out high-end memory products to adapt to market demand and technological evolution.
