The server market is changing: Intel and AMD are fighting fiercely, and domestic chips are emerging.
AI Server: High-Speed Development Opportunities and Industry Chain Analysis.
With the rise of AIGC, the AI server market will usher in rapid development opportunities.
The surge in demand for servers for training & inference has driven the growth of the AI server market.
The AI server industry chain mainly includes four links: chips, servers, software, and services.
AI server competitive landscape: Nvidia, AMD, Intel, and other vendors dominate.
Server Hardware Cost Components:
Processor and chipset: about 50%.
Memory: Approximately 15%.
External storage: Approximately 10%.
Other hardware (IO cards, hard drives, chassis, etc.): Approximately 25%.
The main hardware includes the processor, memory, chipset, IO cards (RAID card, network card, HBA card), hard disks, and the chassis (power supply, fans).
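The shares above can be turned into a quick bill-of-materials split. A minimal sketch, assuming a hypothetical $10,000 build cost (the total is illustrative, not from the source):

```python
# Cost shares cited above: processor/chipset 50%, memory 15%,
# external storage 10%, other hardware 25%.
COST_SHARES = {
    "processor_and_chipset": 0.50,
    "memory": 0.15,
    "external_storage": 0.10,
    "other_hardware": 0.25,
}

def split_bom(total_cost: float) -> dict:
    """Allocate a total build cost across components by share."""
    # Sanity check: the shares must account for the whole cost.
    assert abs(sum(COST_SHARES.values()) - 1.0) < 1e-9
    return {part: round(total_cost * share, 2) for part, share in COST_SHARES.items()}

print(split_bom(10_000))  # processor_and_chipset -> 5000.0, memory -> 1500.0, ...
```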
The logical architecture of the server is similar to that of ordinary computers, but in order to meet the needs of high-performance computing, it has higher requirements in terms of processing power, stability, reliability, security, scalability, and manageability. As a result, many server manufacturers are investing significant resources in the development and production of servers to create servers that meet a variety of needs to cope with today's rapidly evolving digital world.
1.The global server market continues to grow: in 2022, shipments and revenue increased 6% and 17% year-over-year to 13.8 million units and $111.7 billion, respectively, and solid growth is expected in the coming years.
2.China's server market has maintained rapid growth, with a compound annual growth rate of about 14.5%, and the market size is expected to reach $30.8 billion in 2023.
3.China's server market has huge growth potential, thanks to the country's continuous investment in new infrastructure, as well as the rapid development of digital transformation and cloud computing, the demand for servers is growing.
4.With fierce competition among domestic and foreign server manufacturers and a constantly changing industry pattern, domestic server brands are expanding their market share by virtue of their cost-effective advantages, and are expected to become the main force in the growth of the server market in the future.
According to Counterpoint's Global Server Sales Tracker, global server shipments in 2022 increased 6% year-over-year to 13.8 million units, while revenue increased 17% year-over-year to $111.7 billion. According to IDC and the China Commercial Industry Research Institute, China's server market grew from $18.2 billion in 2019 to $27.3 billion in 2022, a compound annual growth rate of about 14.5%, and is expected to reach $30.8 billion in 2023.
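The 14.5% figure can be checked directly from the endpoints: $18.2 billion in 2019 compounding to $27.3 billion in 2022 is three years of growth. A minimal sketch:

```python
# CAGR implied by the China server market figures above:
# $18.2B (2019) -> $27.3B (2022) over 3 years.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

rate = cagr(18.2, 27.3, 3)
print(f"{rate:.1%}")  # ~14.5%, matching the stated CAGR
```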
Inspur dominates, New H3C follows, xFusion breaks through, and ZTE rises strongly.
Inspur firmly occupies the leading position in the domestic server market; New H3C has continued to develop steadily; xFusion has surged into third place; and ZTE has risen strongly into the top five, showing strong competitiveness. According to the "Preliminary China Server Market Tracker Report for the Fourth Quarter of 2022" released by IDC, Inspur leads in China, New H3C ranks second, xFusion third, and ZTE enters the top five.
Three-tier architecture of AIGC industrial ecosystem:
1.Upstream Base Layer:
The AIGC technology infrastructure layer is built based on the pre-trained model.
2.Middle layer:
Vertical, scenario-based, and personalized models and application tools.
3.Application Layer:
Content generation services such as text and audio for C-end users.
The history of GPT.
Both the GPT family and the BERT model are well-known NLP models, both of which are based on Transformer technology. GPT is a generative pre-trained model that was first released by the OpenAI team in 2018.
GPT-1 has only 12 Transformer layers, while GPT-3 has increased the number of layers to 96. The evolution of the GPT model mainly includes:
Orders of magnitude increase in data volume and parameter volume: GPT-3 has several orders of magnitude more data volume and parameter volume than GPT-2.
Shift in training methods: GPT-1 uses a combination of unsupervised pre-training and supervised fine-tuning, while GPT-2 and GPT-3 use pure unsupervised pre-training.
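The scaling described above can be made concrete with the publicly reported parameter counts for each generation (117M for GPT-1, 1.5B for GPT-2, 175B for GPT-3); the layer counts are those cited in the text:

```python
# Publicly reported sizes of the GPT family; growth factors between generations.
models = {
    "GPT-1": {"layers": 12, "params": 117e6},
    "GPT-2": {"layers": 48, "params": 1.5e9},
    "GPT-3": {"layers": 96, "params": 175e9},
}

names = list(models)
for prev, curr in zip(names, names[1:]):
    factor = models[curr]["params"] / models[prev]["params"]
    print(f"{prev} -> {curr}: ~{factor:.0f}x more parameters")
```

GPT-3's roughly 100x jump over GPT-2 is what the text means by "orders of magnitude."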
Heterogeneous computing may become mainstream in the future.
Heterogeneous computing is a computing method that combines computing units of different instruction sets and architectures into a system, including GPU ECSs, FPGA ECSs, and elastic accelerated computing instances (EAIS). The core idea is to let the most suitable dedicated hardware serve the most suitable business scenarios, maximize resource utilization and accelerate computing performance improvement.
Benefits of Heterogeneous Computing:
1.Performance improvements: Computing performance can be significantly improved by assigning computing tasks to the most appropriate hardware.
2.Energy efficiency optimization: By choosing the right hardware, you can reduce power consumption and improve energy efficiency.
3.Scalability enhancements: Heterogeneous computing allows for the use of different types of hardware, allowing for flexible scaling of computing systems.
4.Cost optimization: Heterogeneous computing can select the most cost-effective hardware based on business requirements to reduce costs.
Choosing heterogeneous computing lets you enjoy the advantages of specialized hardware, such as improved performance, better energy efficiency, and optimized cost, to accelerate business innovation and development.
Let the most suitable dedicated hardware serve the most suitable business scenarios.
In the CPU + GPU heterogeneous computing architecture, the CPU and GPU work together to give full play to their respective advantages. The CPU is responsible for the logically complex serial programs, while the GPU focuses on data-intensive parallel computing programs, thus effectively improving computing efficiency.
The CPU and GPU are connected through the PCIe bus; the CPU, called the host, coordinates task distribution, while the GPU, called the device, processes data in parallel. A CPU+GPU heterogeneous platform lets the two complement each other, with the CPU handling logically complex serial programs while the GPU focuses on data-intensive parallel computation, greatly improving computing performance.
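The host/device division of labor can be sketched without real GPU hardware. The following is an analogy only: a thread pool stands in for the "device," and the `device_kernel`/`host_run` names are hypothetical, not a real GPU API:

```python
# Analogy sketch (not actual GPU code): the "host" coordinates and splits
# work; a worker pool stands in for the "device" running data-parallel pieces.
from concurrent.futures import ThreadPoolExecutor

def device_kernel(chunk):
    """Data-parallel piece: square every element of one chunk."""
    return [x * x for x in chunk]

def host_run(data, n_chunks=4):
    """Host side: split work, dispatch to the 'device', gather results."""
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor() as pool:  # stands in for the GPU
        results = pool.map(device_kernel, chunks)  # preserves chunk order
    return [y for part in results for y in part]

print(host_run(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
```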
More and more AI computing uses heterogeneous computing to accelerate performance.
In 2017, Alibaba released its first-generation compute GPU instance, GN4, equipped with the NVIDIA M40 accelerator and serving AI deep learning scenarios over a 10 Gigabit network. Compared with CPU servers of the same era, GN4 delivered a nearly 7x performance improvement, making it one of the industry's leading GPU instances.
CPU: Powerful execution engine.
CPUs are suitable for a wide range of workloads, especially those that demand low latency and high per-core performance. As a powerful execution engine, the CPU concentrates its relatively small number of cores on a single task and completes it quickly, which suits work such as serial computing and running databases.
Advantages of CPUs:
Low latency. High performance per core.
Ideal for handling a single task.
CPU Application Scenarios:
Serial calculations. Database runs.
Other workloads that demand low latency and high per-core performance.
GPUs started with graphics processing: they were originally developed as ASICs designed to accelerate specific 3D rendering tasks. Over time, these fixed-function engines became more programmable and flexible. While graphics and today's increasingly realistic top-of-the-line games remain the GPU's main arena, it has evolved into a versatile parallel processor capable of handling a growing range of applications. The GPU's power comes from its parallel computing capability: when processing images or other data, it can break the work into many small chunks handled simultaneously by its processor cores, greatly accelerating data processing.
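The chunking idea above works because each output element depends only on its own input, so pieces can be computed independently and in any order. A toy sketch with a made-up per-pixel `brighten` kernel and a tiny 1-D "image":

```python
# Each output pixel depends only on its own input pixel, so chunks can be
# processed independently -- the essence of GPU-style data parallelism.
def brighten(pixel: int, gain: float = 1.25) -> int:
    """Per-pixel kernel: independent of every other pixel."""
    return min(255, int(pixel * gain))

image = [0, 64, 128, 200, 255]           # a tiny 1-D "image"
chunk_a, chunk_b = image[:2], image[2:]  # split into independent chunks

# Processing chunks separately, then concatenating, gives the same result
# as processing the whole image serially.
parallel_style = [brighten(p) for p in chunk_a] + [brighten(p) for p in chunk_b]
serial_style = [brighten(p) for p in image]
assert parallel_style == serial_style
print(serial_style)  # [0, 80, 160, 250, 255]
```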
AI servers continue to grow as computing power infrastructure.
The AI server market has huge potential.
The demand for AI servers is expected to grow rapidly, benefiting from the increasing demand for computing power in the AI era.
In 2022, the annual shipment of AI servers equipped with GPGPU accounted for nearly 1% of the total servers.
ChatGPT-related applications are expected to promote the development of AI-related fields, and the annual growth of AI server shipments can reach 8%.
From 2022 to 2026, the compound annual growth rate of AI server shipments is expected to reach 10.8%.
As a basic computing power device, server demand is expected to grow rapidly, benefiting from the AI era's increasing need for computing power. According to TrendForce, as of 2022 the annual shipments of AI servers equipped with GPGPUs (general-purpose GPUs) accounted for nearly 1% of total servers; with the support of chatbot-related applications, shipments are expected to grow about 8% annually, with a compound growth rate of 10.8% from 2022 to 2026.
The AI server market is growing rapidly.
According to IDC, China's AI server market reached $5.7 billion in 2021, a year-on-year increase of 61.6%. The market size is expected to grow to $10.9 billion by 2025, a CAGR of 17.5%.
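The two IDC figures are mutually consistent: $5.7 billion in 2021 compounding at 17.5% for the four years to 2025 lands at the projected size. A quick check:

```python
# Project the 2021 base forward at the stated CAGR: 5.7 * 1.175^4.
def project(base: float, rate: float, years: int) -> float:
    """Compound a base value forward at a fixed annual rate."""
    return base * (1 + rate) ** years

market_2025 = project(5.7, 0.175, 4)
print(f"${market_2025:.1f}B")  # ~$10.9B, matching the 2025 estimate above
```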
Heterogeneous combination of AI servers to meet the needs of different applications.
AI servers have a heterogeneous architecture and can be combined in different ways. Such as CPU + GPU, CPU + TPU, CPU + other accelerator cards, etc., to meet the needs of different applications.
AI servers drive digital transformation.
AI servers are driving the digital transformation of various industries. Industries such as healthcare, finance, manufacturing, transportation, and retail have all seen the potential and value of AI and have begun to deploy AI servers to improve efficiency and service levels.
AI server composition and form.
Inspur NF5688M6 AI Server: Surging Computing Power, Leading the Future with Intelligence.
The Inspur NF5688M6 AI server, equipped with 8 NVIDIA Ampere-architecture GPUs, realizes high-speed P2P GPU interconnection through NVSwitch, delivering massive computing power that easily handles various AI training and inference tasks.
It is equipped with 2 third-generation Intel Xeon Scalable processors (Ice Lake) and supports 8 × 2.5-inch NVMe or SATA/SAS SSDs plus 2 onboard SATA M.2 drives, meeting different storage needs. An optional PCIe 4.0 x16 OCP 3.0 network card supports 10G, 25G, and 100G rates for flexible, reliable network connectivity.
Inspur NF5688M6 AI server is an ideal choice for you to build an AI system and help you succeed in the AI field.
Advantages at a glance: 8 NVIDIA Ampere GPUs with surging computing power can easily cope with various AI training and inference tasks.
NVSwitch enables high-speed P2P communication and interconnection between GPUs, with efficient, lossless data transmission.
2 third-generation Intel Xeon Scalable processors (Ice Lake) provide powerful performance and reliability.
Supports 8 × 2.5-inch NVMe or SATA/SAS SSDs and 2 onboard SATA M.2 drives; storage capacity is flexible and expandable.
An optional PCIe 4.0 x16 OCP 3.0 network card supports 10G, 25G, and 100G rates for flexible, reliable network connectivity.
10 PCIe 4.0 x16 slots, 2 of which can fall back to PCIe 4.0 x8.
1 OCP 3.0 slot.
Supports 32 DDR4 RDIMM/LRDIMM modules at speeds up to 3200 MT/s.
6 × 3000W 80 PLUS Platinum power supplies, with N+1 redundant hot-swappable fans and chassis.
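The 3200 MT/s figure above translates directly into per-channel memory bandwidth: DDR4 moves 8 bytes (a 64-bit bus) per transfer. A back-of-the-envelope sketch; the 8-channel multiplier reflects Ice Lake Xeon's memory controller:

```python
# Peak bandwidth of one DDR4-3200 channel: transfers/s * bytes/transfer.
def ddr4_channel_bw_gbs(mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Peak bandwidth of a single memory channel in GB/s."""
    return mt_per_s * bytes_per_transfer / 1000  # MT/s * B -> GB/s

per_channel = ddr4_channel_bw_gbs(3200)
print(per_channel)      # 25.6 GB/s per channel
print(per_channel * 8)  # 204.8 GB/s across 8 channels per CPU
```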
Gallop in the AI world, and the computing engine can be unlocked on demand.
4 GPUs (Inspur NF5448A6): Mainstream AI servers to meet basic AI training needs.
8 GPUs (NVIDIA A100 640GB): The performance is greatly improved, suitable for medium and large AI training tasks.
16 GPUs (NVIDIA DGX-2): A top-of-the-line AI server for complex AI model training and scientific research.
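The "640GB" in the 8-GPU tier above is simply the aggregate GPU memory of eight 80 GB A100 cards. A small arithmetic sketch (per-GPU memory varies by card model, so this applies only to the 80 GB A100 configuration):

```python
# Aggregate GPU memory for an 8 x A100-80GB configuration.
def total_gpu_memory_gb(num_gpus: int, gb_per_gpu: int = 80) -> int:
    """Total on-board GPU memory across all cards, in GB."""
    return num_gpus * gb_per_gpu

print(total_gpu_memory_gb(8))  # 640 GB -> matches the "A100 640GB" naming
```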
Core Components: GPU (Graphics Processing Unit): Responsible for complex math and graphics calculations.
DRAM (Dynamic Random Access Memory): Provides high-speed data access.
SSD (Solid State Drive): Provides fast storage.
RAID Cards: Manage multiple hard drives to improve storage performance and reliability.
Other components: CPU (Central Processing Unit): Coordinates the system and performs general tasks.
Network Adapter: Connect to the server and network.
PCB: Connects all components and provides power.
High-speed interconnect chip (on-board): Enables fast communication between components.
Heat Sink Module: Keeps the system cool and prevents overheating.
Key vendors in the CPU and GPU segments:
CPU: supplied mainly by Intel.
GPU: the international giant NVIDIA leads, while domestic manufacturers such as Cambrian and Haiguang Information are also developing.
AI server competitive landscape.
IDC released the "Preliminary China Server Market Tracker Report for the Fourth Quarter of 2022," and xFusion emerged as the biggest winner, with its share surging from 3.2% to 10.1%.
Inspur remains in first place with a 28.1% share, but the decline is significant. New H3C follows with a 17.2% share, also a slight decrease.
Dell and Lenovo ranked next, but both lost share, with Dell falling from 15.8% to 12.3%.
ZTE has sprung up, with its share rising from 3.1% to 5.3%, jumping from ninth to fifth.
The strong performance of xFusion and ZTE has broken the market pattern and injected new vitality into the competition. According to the report, the top two, Inspur and New H3C, changed little, while third-place xFusion jumped from a 3.2% share to 10.1%, an increase far exceeding other server manufacturers. Among the top eight server makers, Inspur, Dell, and Lenovo all declined notably, while xFusion and ZTE grew significantly: Inspur's share fell from 30.8% to 28.1%, New H3C's from 17.5% to 17.2%, and ZTE's rose from 3.1% to 5.3%, placing it fifth in China.
Lenovo saw the most significant drop, from 7.5% down to 4.9%.
AI server procurement pattern: North American giants dominate, domestic wave rises.
Key data: according to TrendForce, in 2022 the four major North American cloud operators (Google, AWS, Meta, and Microsoft) together accounted for 66.2% of AI server procurement.
AI build-out in China is accelerating, and ByteDance's share of procurement has reached 6.2%, leading domestic players.
Tencent, Alibaba, and Baidu followed, with procurement shares of about 2.3%, 1.5%, and 1.5%, respectively.
Insight: North American cloud giants dominate AI server procurement due to their massive data centers and computing needs.
Domestic manufacturers are accelerating their catch-up, and Internet giants represented by ByteDance are actively deploying in the AI field, driving the growth of the domestic AI server market.
Trend: With the development of AI technology and the continuous expansion of its application, the AI server market will continue to grow rapidly.
Domestic players are expected to further expand their market share by virtue of their cost advantages and in-depth knowledge of the local market.
The competitive landscape of the domestic AI server market.
Inspur Information, New H3C, xFusion, ZTE, and other competitors lead the domestic AI server market.
GPGPU Research Framework and Computing Power Analysis (2023).
Core barriers: high-precision floating-point computing and CUDA ecosystem.
There is still a gap between domestic GPU products and foreign products, which is mainly reflected in high-precision floating-point computing capabilities and software ecology.
Computing performance: Domestic GPUs need to be broken through.
Biren BR100: FP32 single-precision performance surpasses the NVIDIA A100, but FP64 double-precision computation is not supported.
Tianshu Zhixin Tianyuan 100: FP32 single-precision performance exceeds the A100, but INT8 integer performance is lower than the A100.
Haiguang DCU: supports FP64 double-precision floating point, with performance about 60% that of the A100.
Overall, domestic GPUs still lag behind foreign products by more than one generation in computing performance.
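Why FP64 support matters can be shown in a few lines: rounding a value through 32-bit storage loses precision that 64-bit keeps. Python floats are already FP64; `struct` packs to IEEE-754 single precision:

```python
# Round-trip a Python float (FP64) through IEEE-754 single precision
# to make the FP32 rounding error visible.
import struct

def to_fp32(x: float) -> float:
    """Round-trip a Python float (FP64) through 32-bit storage."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 1.0 / 3.0
print(f"FP64: {x:.17f}")           # 0.33333333333333331
print(f"FP32: {to_fp32(x):.17f}")  # 0.33333334326744080 -- off in the 8th digit
```

For scientific computing and supercomputing workloads that accumulate billions of such operations, this per-value error is why FP64 capability is a hard requirement rather than a nice-to-have.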
Software Ecology: CUDA dominates the world.
Nvidia holds about 90% of the global GPU market on the strength of its CUDA ecosystem barrier. Most domestic companies use open-source OpenCL for independent ecosystem construction, but this takes a long time to build out. AMD began building its GPU ecosystem in 2013; it took nearly 10 years for its ROCm open software platform to gain real influence, and it remains compatible with CUDA.
The way to break through the domestic GPU: independent innovation and ecological construction.
Although there is still a gap between domestic GPUs and international manufacturers, domestic manufacturers are catching up.
Challenges and Opportunities: The Process of Localization under the Ban.
The U.S. ban on the sale of high-end GPUs in China has brought opportunities for domestic GPGPU and AI chip manufacturers to develop rapidly.
Short-term impact: Industrial progress is hindered.
The ban could affect sales of Nvidia and AMD's GPU products in China, hindering the progress of China's AI computing, supercomputing and cloud computing industries.
Long-term opportunity: Surge in demand for localization.
The huge domestic market and the information innovation market have brought about an increase in demand for localization, and it is expected that the proportion of localization of domestic AI chips will increase significantly.
Suggestions from domestic manufacturers: independent innovation and ecological construction.
Focus on achieving independent innovation and building an independent ecosystem. Domestic enterprises should:
Continue to improve computing performance and narrow the gap with international manufacturers.
Accelerate the construction of the software ecosystem and build a complete independent ecosystem.
Cooperate with domestic and foreign partners to jointly build a GPGPU localization ecosystem.
The core barriers of GPGPU are high-precision floating-point computing and CUDA ecosystem. From the perspective of high-precision floating-point computing capabilities, there may still be a gap of more than one generation between the computing performance of domestic GPU products and foreign products; At the software and ecological level, the gap with NVIDIA's CUDA ecosystem is even more obvious.
In the field of AI computing GPUs, the BR100 released by China's Biren Technology surpasses the NVIDIA A100 in FP32 single-precision performance but does not support FP64 double-precision computation. The FP32 single-precision performance of the Tianyuan 100 launched by Tianshu Zhixin also exceeds the A100, but its integer computing performance is lower than the A100's. The DCU launched by Haiguang implements FP64 double-precision floating-point computation, but its performance is about 60% that of the A100, roughly the A100's level of four years earlier. Therefore, in terms of high-precision floating-point capability, domestic GPU products may still trail foreign products by more than a generation.
However, a GPU must do more than improve hardware computing power; the software level is especially important for GPU applications and ecosystem layout, and NVIDIA holds about 90% of the global GPU market through the ecosystem barrier built around CUDA. At present, most domestic enterprises use open-source OpenCL for independent ecosystem construction, but this requires a long time to build out.
By comparison, AMD began building its GPU ecosystem in 2013, and it took nearly 10 years for its ROCm open software platform for general computing to gradually become influential, while remaining compatible with CUDA. We therefore believe the gap between domestic manufacturers and NVIDIA's CUDA ecosystem at the software and ecosystem level is even more pronounced than the gap in computing performance.
Although there is still a gap between the computing performance and software ecological strength of domestic products and international manufacturers, domestic manufacturers are still catching up and striving to achieve a breakthrough in the localization of GPGPU.
In the long run, the U.S. ban on sales of high-end GPUs to China creates an opportunity for domestic GPGPU and AI chip manufacturers to develop rapidly. In the short term, we believe the ban on high-end general-purpose computing GPUs may affect sales of NVIDIA and AMD GPU products in China and hinder the progress of China's AI computing, supercomputing, and cloud computing industries; demand can be partially met by mid-to-high-performance CPUs, GPUs, and ASIC chips that are not covered by the ban, from NVIDIA, AMD, and domestic manufacturers.
In the long run, domestic CPU, GPU, and AI chip manufacturers will benefit from the huge domestic market; combined with rising localization demand from the domestic information innovation market, we expect the localization ratio of domestic AI chips to increase significantly, and manufacturers can take this opportunity to upgrade products and gradually approach the international advanced level. For domestic manufacturers, the recommendation is to focus on independent innovation and building an independent ecosystem. The rise of leading domestic chip enterprises:
Loongson Zhongke: the leading PC CPU in China, the first to launch self-developed GPGPU products, helping to improve domestic graphics processing capabilities.
Haiguang Information: The leading server CPU in China, successfully launched the Deep Computing (DCU) chip to help improve the artificial intelligence computing capacity of the data center.
Jingjiawei: The leading GPU in domestic graphics rendering, it has made breakthroughs in the fields of games and image processing, and has enhanced the competitiveness of domestic chips in the field of graphics processing.
Cambrian: A leading ASIC chip company in China, focusing on the research and development of artificial intelligence chips, and creating high-efficiency and low-power AI chip products.
Montage Technology: the leading domestic server memory interface chip vendor, providing high-performance, low-power memory solutions that strengthen domestic chips in the data center.
Other beneficiaries along the industry chain include PCB makers (Shenghong Technology, Xingsen Technology, Hudian Co., Ltd.) and advanced packaging companies (Tongfu Microelectronics, Yongsi Electronics, Changdian Technology, Changchuan Technology).
Semiconductor giants are on the rise again:
NVIDIA: The global GPU hegemon, leading the revolution in artificial intelligence and autonomous driving.
AMD: CPU and GPU are both fierce and forge ahead, challenging the industry landscape.
Intel: CPU supremacy is solid, transforming AI and data centers.
Micron: A memory chip giant, leading innovation in memory and flash technology.
GPUs: The building blocks that shape the digital world.
GPU overview.
The GPU (graphics processing unit) is an electronic circuit in a computer specialized in processing images and video. With its powerful parallel computing capability, the GPU plays a vital role in computer graphics, deep learning, scientific computing, and more.
Global GPU market landscape.
The global GPU market presents a competitive pattern of "one super and one strong". With its strong technical strength and market influence, NVIDIA has long occupied a dominant position in the GPU market. AMD has emerged in recent years, gradually eating into NVIDIA's market share with its cost-effective advantage.
Nvidia's leading position in GPUs is solid.
Nvidia's position in the GPU market is very solid. Its GPU products are favored by consumers and business users for their excellent performance and reliability. Nvidia also maintains a leading position in emerging fields such as artificial intelligence and autonomous driving.
Domestic GPU manufacturers are gradually catching up.
Domestic GPU manufacturers have developed rapidly in recent years, and a number of competitive enterprises have emerged, such as Jingjiawei, Cambrian, Muxi, etc. These companies have made great progress in GPU chip design, algorithm optimization, etc., and have launched GPU products with independent intellectual property rights.
GPU technology has a bright future.
GPU technology is in a stage of rapid development, and its application fields are constantly expanding. In the future, GPUs will continue to play an important role in computer graphics, deep learning, scientific computing, and other fields, and are expected to make breakthroughs in emerging fields such as autonomous driving and intelligent robots.
What are your thoughts on this? Feel free to share in the comment section.