AI Computing Power Series: Complete Server Machines Climb the Peak and Open Up a New World

Mondo Technology Updated on 2024-01-31

Shared today: an in-depth research report from the AI computing power series, "AI Computing Power: Complete Server Machines Open Up a New World and Embrace New Opportunities for Computing Power."

Report producer: Zhongtai**.

Report total: 37 pages.

Featured Report**: The School of Artificial Intelligence

Computing power: with compute at its core, overall capability spans computing power scale, economic benefit, and supply-demand balance.

It is imperative for computing power to become a public basic resource, and "computing power is ubiquitous" is inseparable from the support of computing power networks.

With the digital economy era now fully under way, computing power is empowering thousands of industries as a new form of productivity and providing stronger support for high-quality economic and social development. However, explosive growth in demand for intelligent computing has made high-performance compute scarce, and current computing power still suffers from high marginal costs of application, with many users unable to afford or access it. Demand for computing power will keep rising, and the industry is expected to remain highly prosperous.

China's computing power layout is at the stage of connecting points into lines and weaving them into a dense network. China continues to optimize infrastructure construction and build data center clusters, and has successively launched the East Data, West Computing project and the construction of a national supercomputing internet. These connect computing resources across the country into a nationally integrated computing network with unified scheduling, so that computing power can become a public basic resource that, like water and electricity, is available through single-point access and ready to use, realizing greater value through the collaboration of digital elements.

The global digital economy continues to accelerate, and the computing power industry is booming. In 2022, the total computing power of global computing equipment reached 906 EFLOPS, a growth rate of 47%. Global computing power is expected to grow at more than 50% per year over the next five years, with the total computing power of global computing equipment exceeding 3 ZFLOPS by 2025.

The scale and supply level of computing power in China continue to grow, integrated applications continue to deepen, and the benefits of empowering industries are increasingly apparent. According to calculations by the China Academy of Information and Communications Technology, the total computing power of China's computing equipment reached 302 EFLOPS in 2022, about 33% of the global total, with growth above 50% for two consecutive years, higher than the global rate. Demand for intelligent computing power is growing explosively, its share of total computing power will keep rising, and its compound growth rate over the next five years is projected at 52.3%.

China's server market is developing rapidly. According to IDC, China's server market was worth about US$27.3 billion in 2022, a year-on-year increase of 9.1%. Unit shipments in 2022 reached about 4.478 million, up 12.7% year-on-year.

Due to the soaring demand for AI, global spending on complete machines is tilted towards AI servers, and the general-purpose server market is further compressed. In the future, with the acceleration of industrial intelligent transformation, the upgrading of high-computing power infrastructure, and the commercial development of application scenarios, major manufacturers will continue to increase the layout of AI large models, bringing broad development space to China's server industry.

The leading players' effect in China's server market is significant, and the competitive landscape is diversified. The domestic market is led by Inspur, New H3C, and xFusion, with high industry concentration and a stable competitive landscape. Emerging technologies such as AIGC and cloud computing continue to create new growth points for the server industry, and major manufacturers are competing for them.

The total amount of global data and the scale of computing power are growing rapidly. According to IDC, the global datasphere reached 103.66 ZB in 2022, and China's data volume will grow from 23.88 ZB in 2022 to 76.6 ZB in 2027, a CAGR of 26.3%, which is expected to rank first in the world. IDC also forecasts that the world will generate more data in the next three years than in the past 30 combined; this data explosion exponentially increases the computing power required to store, transmit, and process data.
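As a quick sanity check, the growth figures quoted above are mutually consistent; a minimal sketch, assuming the IDC start and end values for China's data volume:

```python
# Check that 23.88 ZB (2022) -> 76.6 ZB (2027) implies the quoted ~26.3% CAGR.
start, end, years = 23.88, 76.6, 5  # China datasphere in ZB, per IDC as quoted

cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # CAGR: 26.3%
```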

The intelligent upgrade of computing power has become a trend, and intelligent computing power has become the main driver of compute growth. Processing massive, complex data places higher requirements on computing power, and more powerful and efficient computing resources are needed to support artificial intelligence applications. Under this trend, the construction of computing infrastructure has accelerated, becoming an important foundation for the digital economy, with demand for data capacity and computing power reinforcing each other in a cycle. IDC forecasts that China's intelligent computing power will continue to grow rapidly, reaching 1,117.4 EFLOPS in 2027, a 2022-2027 CAGR of 33.9%.

The development and iteration of algorithms drive demand for computing power. In traditional machine learning and deep learning, computing power, as the underlying infrastructure, plays a crucial role, continually enabling iterative innovation in upper-layer technologies. Deep learning is currently in a period of intense development: the training and inference of deep learning and machine learning algorithms require massive computing resources, and greater compute is indispensable for more complex AI tasks such as image and speech recognition and natural language processing.

The rapid evolution of models provides strong impetus for the development of computing power. As large models continue to evolve, their parameter counts and complexity increase significantly. Training today's super-large models requires large-scale computing clusters and matching model-parallel algorithm frameworks, and the compute involved is enormous, so the required scale of computing power is rising by orders of magnitude, i.e., exponentially.

The computing power requirements of AIGC can be divided into training computing power requirements and inference computing power requirements. Deep learning tasks in the training and inference phases require a lot of computational resources.

Training side: according to OpenAI's analysis, since 2012 the training compute of the world's top AI models has doubled every 3.4 months, an annual compute growth of up to roughly 10x.
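The 3.4-month doubling time and the "up to 10x per year" figure are two views of the same trend; a one-line check:

```python
# Annual growth factor implied by a 3.4-month doubling time (OpenAI's figure).
doubling_months = 3.4
annual_factor = 2 ** (12 / doubling_months)
print(f"annual growth factor: {annual_factor:.1f}x")  # annual growth factor: 11.5x
```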

In terms of intelligent computing power, based on estimates of OpenAI's training cluster model, GPT-3 has about 174.6 billion parameters, and one full training run requires about 3,640 PF-days of compute (i.e., performing 10^15 floating-point operations per second for 3,640 days). GPT-4, released in 2023, may have roughly 1.8 trillion parameters, about 10 times GPT-3, and its training compute requirement rises to about 68 times that of GPT-3: roughly 25,000 A100 GPUs training for 90-100 days.
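The quoted 3,640 PF-days can be cross-checked against the common 6 × parameters × tokens heuristic for transformer training compute; this is a sketch, with the ~300-billion-token training set size an assumption taken from the GPT-3 paper rather than from this report:

```python
# Convert 3,640 PF-days into total FLOPs, then compare with 6 * N * D.
PF = 1e15                 # one petaFLOP per second
SECONDS_PER_DAY = 86_400

pf_days = 3_640
total_flops = pf_days * PF * SECONDS_PER_DAY   # ~3.14e23 FLOPs

params = 174.6e9          # GPT-3 parameter count quoted in the text
tokens = 300e9            # ~300B training tokens (assumption, GPT-3 paper)
heuristic = 6 * params * tokens                # ~3.14e23 FLOPs

print(f"PF-days: {total_flops:.2e}, 6*N*D: {heuristic:.2e}")
```

The two estimates agree to within a fraction of a percent, which is why PF-days and the 6ND rule of thumb are used interchangeably in practice.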

Take the Megatron-Turing NLG (MT-NLG) model jointly developed by Microsoft and NVIDIA as an example: the model has 530 billion parameters, used 4,480 A100 GPUs during training, and ultimately delivered excellent performance on natural language processing tasks.

Inference: the process of running a trained model on input data to produce results; this is generally where AI technology is applied in practice. The compute needed for inference deployment depends mainly on the daily data throughput of each application scenario.

On the inference side, per OpenAI's estimate, inference-stage compute ≈ 2 × (number of model parameters) × (number of tokens processed). Taking GPT-3 as an example, the e-Surfing Think Tank estimates that generating 500 tokens (about 350 words) requires about 1.75 × 10^14 FLOPs (0.175 PFLOPs).
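The formula above can be applied directly; a minimal sketch using the parameter count and token count quoted in the text:

```python
# Inference compute per request via the 2 * parameters * tokens rule of thumb.
params = 174.6e9   # GPT-3 parameters, as quoted above
tokens = 500       # ~350 words of generated text

flops = 2 * params * tokens
print(f"{flops:.3e} FLOPs")  # ~1.75e14 FLOPs, i.e. 0.175 PFLOPs
```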

In the future, demand for AI servers will shift from the training side to the inference side, and demand for inference compute will grow exponentially. As training models improve and mature, models and application products will gradually enter production, and the proportion of AI servers handling inference workloads will rise. According to IDC, servers used for inference already accounted for more than half of China's data center market in 2022, at 58.5%, and the inference workload share is expected to reach 62.2%.

In 2023, ChatGPT ignited a new round of artificial intelligence enthusiasm, and the era of AIGC driven by large models officially began.

The development of AIGC relies on large models, which drives the rapid growth of computing power demand.

AIGC is an accelerator for the computing power development market. The development of AIGC will continue to accelerate the construction of computing infrastructure with higher computing performance and faster interconnection performance.

As domestic and foreign AIGC manufacturers accelerate the deployment of large models with hundreds of billions of parameters, the demand for computing power will further increase, boosting the rapid growth of the AI server market and shipments.

Multimodality broadens the boundaries of applications and pushes compute demand significantly higher. Since speech and image data are significantly larger than text, multimodal large models require more compute for training and inference than unimodal models. Google's embodied multimodal language model PaLM-E has 562 billion parameters, far exceeding GPT-3. According to Patel and Nishball, Google's multimodal large model Gemini was trained with up to 1e26 FLOPs of compute, about 5 times that required to train GPT-4.
