The global AI buildout is accelerating: a large number of AI models are emerging, and their launch and iteration have sharply increased upstream demand for computing power. With rising data loads on China's digital infrastructure, growth in China's AI server market is accelerating.
Looking at the overall trend, governments have introduced a steady stream of policies around large models, and the pace of updates and iteration in data, computing power, and algorithms has quickened, jointly driving performance gains in large AI models.
Computing power, with the AI server as its carrier, is one of the end-point manifestations of artificial intelligence. Storage and network transmission capacity linked to computing power will improve in step, driving growth in compute chips (GPU, FPGA, ASIC, etc.), memory chips, optical modules, PCBs, power management, heat dissipation, and more. The entire industry chain stands to grow as AI server penetration rises.
According to TrendForce estimates, AI server shipments (servers equipped with GPUs, FPGAs, ASICs, etc.) will reach nearly 1.2 million units in 2023, a year-on-year increase of 38.4%, accounting for nearly 9% of overall server shipments.
The increase in artificial-intelligence computing power directly drives server demand. As the core infrastructure for delivering that computing power, AI servers can support local applications and web services, and can also provide complex AI models and services to cloud and on-premises environments.
AI servers use heterogeneous architectures, and the number of GPUs is much higher than that of ordinary servers.
From the perspective of upstream costs, chips, memory, and hard disks account for nearly 70% of server production costs. In an inference server, the GPU accounts for about 25% of cost; in a training server, the GPU accounts for about 73% of overall cost.
The main differences between AI servers and ordinary servers lie in architecture and GPU count: AI servers use heterogeneous architectures such as CPU+GPU, CPU+FPGA, or CPU+ASIC, while ordinary servers generally use a CPU-only architecture; and a single AI server usually carries four or more GPUs, far more than an ordinary server.
The GPU is the mainstream acceleration architecture and the core cost component of the AI server. GPUs rely on parallel computing and are suited to compute-intensive workloads such as graphics rendering and machine learning; rising AI computing requirements have further raised the demands on GPU card speed and volume.
Compared with general-purpose servers, the added value of AI servers comes mainly from the significant increase in AI chips and memory bandwidth; the supporting components are upgraded to varying degrees as well.
According to semiconductor industry observers, taking the NVIDIA DGX H100 as an example, the GPU board group (including HBM) accounts for the highest share of value at about 73%, followed by storage at about 5%, of which DRAM is about 3% and NAND about 2%. Combined, GPUs and storage thus account for nearly 80% of total value.
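As a quick sanity check, the quoted component shares can be summed to get the combined GPU-plus-storage portion of system value. A minimal sketch (the share figures are those quoted above; the dictionary keys are illustrative, and the unlisted remainder would cover CPU, power supply, chassis, and other parts):

```python
# Value shares of a DGX H100 system as cited in the text.
shares = {
    "gpu_board_group_incl_hbm": 0.73,  # GPU board group, including HBM
    "dram": 0.03,
    "nand": 0.02,
}

# Sum the GPU and storage components.
gpu_and_storage = sum(shares.values())
print(f"GPU + storage share: {gpu_and_storage:.0%}")  # -> 78%
```

This is why the combined figure lands near 80% rather than higher: the remaining fifth of system value is spread across the other components.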
In addition, as computing power increases, heat-dissipation requirements rise correspondingly, and the introduction of liquid cooling technology is expected to bring new opportunities to the thermal-management segment.
Compared with general-purpose servers, AI servers make greater use of GPGPUs; with a configuration of 4 or 8 NVIDIA A100 80GB cards, HBM usage is roughly 320–640 GB.
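The HBM range above is simple per-card arithmetic; a minimal sketch, assuming 80 GB of HBM per A100 80GB card:

```python
HBM_PER_A100_GB = 80  # NVIDIA A100 80GB carries 80 GB of HBM2e per card

def total_hbm_gb(num_gpus: int) -> int:
    """Total HBM capacity for a server with num_gpus A100 80GB cards."""
    return num_gpus * HBM_PER_A100_GB

print(total_hbm_gb(4))  # 4-GPU configuration: 320 GB
print(total_hbm_gb(8))  # 8-GPU configuration: 640 GB
```

The same linear scaling explains why memory content per server, and hence DRAM/HBM demand, grows step-for-step with GPU count.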
Going forward, increasingly complex AI models will require more memory, simultaneously driving demand growth for server DRAM, SSDs, and HBM.
In AI servers, GPUs account for a larger share of cost:
According to statistics, the four major North American cloud vendors (Microsoft, Google, Meta, and AWS) accounted for about 66% of total AI server purchases in 2022. Overseas cloud giants currently have the greater demand for AI servers, but as domestic large AI models develop and their applications drive more AI server demand, China's AI server market is expected to expand further.
In recent years, the wave of AI buildout in China has continued to heat up. Among domestic buyers, ByteDance led annual AI server procurement, followed by Tencent and Alibaba.
According to IDC data, Inspur Information, New H3C, and xFusion held the top three shares of China's AI server market in 2022.
In 2022, Inspur's AI servers won 49 titles in MLPerf, the world's authoritative AI performance benchmark, with comprehensively leading AI training and inference performance providing strong impetus for AI research and applications. Its AI server products have been adopted by leading global internet giants and technology companies in fields such as AI+Science, AI+Graphics, and AIGC, making Inspur the world's largest AI server provider.
Inspur Information Yingxin server NF5688M6 and its configuration:
Sugon is a leading Chinese manufacturer of high-end servers. Its high-end server products are fully self-developed across the stack and have been deployed at scale. The company continues to build out its computing-power services business, accelerating the innovation and deployment of complex industry applications through a national integrated computing service platform and providing computing power for multiple large models in China.
Vendors with AI server offerings also include China Great Wall, Tuowei Information, and Tongfang Co., Ltd. Many manufacturers participate across upstream and downstream links; representative names include ZTE, iFLYTEK, Hikvision, 360, Montage Technology, Loongson Zhongke, Digital China, Haitian AAC, and Donghua Software.
Advanced technologies such as AIGC and cloud computing are bringing enormous change and opportunity to many industries. Against this backdrop, supercomputing, as a cornerstone of large-scale data processing and high-performance computing, is seeing continued growth in demand and application scenarios.
The combination of supercomputing and cloud computing has brought new growth points to the server market. The elasticity and scalability of cloud computing make supercomputing resources more efficient to serve many fields such as scientific research, industrial design, simulation, and big data analysis. This combination not only improves computing efficiency, but also reduces the cost of use, so that more institutions and enterprises can enjoy the convenience brought by supercomputing.
At the same time, the AI high-end servers required by supercomputing have also become a new star in the server market. AI servers have higher performance, stronger stability, and better scalability, which can meet the strict requirements of supercomputing in terms of computing speed and data processing capabilities. With the continuous popularization and deepening of supercomputing applications, the demand for such high-end servers will continue to rise, which is expected to drive the entire server market to achieve further growth.