2024 Outlook: Top 10 Development Trends in the Data Center Industry

Mondo Finance Updated on 2024-02-01

In 2023 we witnessed the explosion of artificial intelligence (AI), which is changing the way people work, live, and interact with technology. Generative AI, represented by ChatGPT, attracted enormous attention over the past year thanks to its rapid progress and wide application. As AI continues to evolve and mature, it has the potential to revolutionize industries ranging from healthcare, finance, and manufacturing to transportation, entertainment, and beyond. The huge demand for AI is driving the development of new chip and server technologies, and these changes will bring disruptive challenges to data center construction: power demand, water consumption, power supply and distribution, and cooling technologies and architectures. How to deal with these challenges will be a central topic for the industry in the new year.

As a global leader in data center infrastructure and digital services for key industry applications, Schneider Electric has published a set of forward-looking industry insights at the start of each year since 2018, now for the seventh consecutive year, helping set the direction for future change and injecting momentum into the data center industry. Drawing on deep industry insight and practice, Schneider Electric sets out what changes the data center industry can expect in the new year, what these changes and trends mean for data center operators, and its own perspective and value proposition on them. Here is what Schneider Electric's Global Data Center Research Center has to say about 2024 trends.

Trend 1: Intelligent computing centers will lead the construction of data centers

Over the past decade, cloud computing has been the main driving force behind data center construction and development, providing society with the general-purpose computing power needed for digital transformation. The explosion of AI, however, has created a huge demand for computing power: to support the training and inference of large AI models, a large number of intelligent computing centers must be built. Schneider Electric estimates the current power demand of intelligent computing centers worldwide at 4.5 GW, about 8% of the 57 GW total for all data centers, and expects it to grow at a CAGR of 26%-36% through 2028, eventually reaching 14.0-18.7 GW, or 15%-20% of a 93 GW total. That growth rate is two to three times the compound annual growth rate of traditional data centers (4%-10%). The distribution of computing power will also shift from today's largely centralized deployment (roughly 95% centralized vs. 5% edge) toward an even 50%:50% split with the edge. In short, intelligent computing centers will lead the trend in data center construction. According to the plan of China's Ministry of Industry and Information Technology, intelligent computing power will account for 35% of the country's total by 2025, a compound annual growth rate of more than 30%. Schneider Electric believes that, compared with traditional data centers, intelligent computing centers must be built sustainably and with more foresight while still ensuring high energy efficiency and high availability: minimizing environmental impact, and above all improving adaptability to the needs of future IT technologies (high-power chips and servers).
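To put those growth rates in perspective, the compound growth multiples implied by the CAGR ranges above can be checked with a few lines of arithmetic (a sketch; only the CAGR figures quoted in the text are used, everything else is plain compounding):

```python
def growth_multiple(cagr: float, years: int = 5) -> float:
    """Total growth factor after `years` of compound annual growth."""
    return (1 + cagr) ** years

# AI / intelligent computing demand: 26%-36% CAGR through 2028
ai_low, ai_high = growth_multiple(0.26), growth_multiple(0.36)      # ~3.2x, ~4.7x
# Traditional data center demand: 4%-10% CAGR
trad_low, trad_high = growth_multiple(0.04), growth_multiple(0.10)  # ~1.2x, ~1.6x

print(f"AI demand grows {ai_low:.1f}x-{ai_high:.1f}x over 5 years")
print(f"Traditional demand grows {trad_low:.1f}x-{trad_high:.1f}x over 5 years")
```

Even the low end of the AI range roughly triples demand in five years, while traditional data center demand grows by well under a factor of two over the same period.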

Trend 2: AI will drive a sharp increase in cabinet power density

Cabinet power density strongly affects data center design and cost, including power supply and distribution, cooling, and IT room layout, and has always been one of the design parameters that receives the most attention. Uptime Institute's research over the past few years shows that server cabinet power density has been climbing steadily but slowly: the average is typically below 6 kW, and most operators have no cabinets above 20 kW. Reasons for this include Moore's Law, which kept chip thermal design power relatively low (around 150 W), and the practice of spreading high-density servers across different cabinets to reduce infrastructure requirements. The explosion of AI will change this trend. Schneider Electric's research finds that AI training cabinets can reach power densities of 30-100 kW, depending on the chip type and server configuration. There are several reasons for this high density: CPU and GPU thermal design power is rising rapidly (200-400 W for CPUs, 400-700 W for GPUs, with further increases ahead); AI server power consumption is typically around 10 kW; and because GPUs work in parallel, AI servers must be deployed compactly in clusters to reduce network latency between chips and storage. This steep increase in cabinet power density poses a significant challenge to the design of data center physical infrastructure.
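The cabinet-level arithmetic behind these densities can be sketched in a few lines (component counts and the overhead fraction below are illustrative assumptions, not vendor specifications; only the TDP ranges come from the text above):

```python
# Back-of-envelope AI server and cabinet power estimate.
GPU_W, GPUS = 700, 8        # high-end training GPU TDP, GPUs per server (assumed)
CPU_W, CPUS = 400, 2        # host CPU TDP, CPUs per server (assumed)
OVERHEAD = 0.30             # fans, NICs, memory, conversion losses (assumed)

server_kw = (GPU_W * GPUS + CPU_W * CPUS) * (1 + OVERHEAD) / 1000
rack_kw = server_kw * 4     # four densely packed servers per cabinet (assumed)
print(f"~{server_kw:.1f} kW per server, ~{rack_kw:.0f} kW per cabinet")
```

With these assumptions a single server lands near the ~10 kW figure quoted above, and a cabinet of four servers already sits in the 30-100 kW AI training range.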

Trend 3: Data centers are transitioning from air-cooled to liquid-cooled

Air cooling has long been the mainstream way to cool IT rooms in data centers, and if properly designed it can support cabinet power densities of a dozen kilowatts or more. However, as developers keep raising chip thermal design power in pursuit of AI training performance, air cooling these chips is becoming impractical. Some server vendors continue to push the limits of air cooling by redesigning chip heat sinks, increasing server airflow and the inlet-outlet temperature difference, and offering 40-50 kW air-cooled AI cabinets, but this drives fan power consumption up sharply: fans in an AI server can consume up to 25% of server power, versus a typical 8% for a traditional server. Schneider Electric regards chip cooling as the main driver of liquid cooling and sees a cabinet power density of 20 kW as a reasonable dividing line between air and liquid cooling; above that value, liquid-cooled servers should be considered. Liquid cooling also offers several benefits over air cooling, including improved processor reliability and performance, better energy efficiency, reduced water usage, and lower noise levels. For today's high-density AI servers, vendors usually offer both air-cooled and liquid-cooled options, but for next-generation GPUs, liquid cooling will be the only choice.
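A quick calculation shows what the fan-power gap quoted above means for a single AI server (the ~10 kW server power and 24/7 operation are assumptions; the 25% and 8% fan fractions are the figures from the text):

```python
# Fan power penalty of pushing air cooling to AI densities (illustrative).
server_kw = 10.0                     # assumed AI server power draw
ai_fan_kw = server_kw * 0.25         # air-cooled AI server: ~25% goes to fans
typical_fan_kw = server_kw * 0.08    # traditional server: ~8% goes to fans

extra_kw = ai_fan_kw - typical_fan_kw
extra_kwh_per_year = extra_kw * 24 * 365   # assuming continuous operation
print(f"Extra fan power: {extra_kw:.1f} kW (~{extra_kwh_per_year:.0f} kWh/year per server)")
```

Roughly 1.7 kW per server lost to fans alone, before any cooling plant energy, is part of why liquid cooling becomes attractive above the 20 kW dividing line.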

Trend 4: The safety and reliability of power distribution is more important in the intelligent computing center

For traditional data centers, the probability of different workloads peaking at the same time is extremely low; a typical large data center has a peak-to-average ratio of 1.5-2.0 or higher. In an intelligent computing center, however, the AI training load shows little variation (its peak-to-average ratio is close to 1.0), and workloads can run at peak power for hours, days, or even weeks. The result is a greater likelihood of tripping large upstream circuit breakers, along with the associated risk of downtime. At the same time, rising cabinet power density requires circuit breakers, row-end distribution cabinets, small busbars, and similar equipment with higher current ratings. Their lower resistance allows larger fault currents to pass, which means the risk of arc flash in the IT room also increases, and ensuring worker safety there is a problem that must be solved. Schneider Electric recommends using simulation software during the design phase to perform an arc flash risk assessment of the power system, analyze the fault currents that could occur, and analyze reliability so as to design the best solution for a specific site. It also recommends that if the AI training workload of a new data center IT room exceeds 60%-70%, the main circuit breaker should be sized from the sum of the downstream feeder breakers, with no simultaneity (diversity) factor applied in the design.
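The sizing rule suggested above can be sketched as a simple decision: when AI training load dominates, peaks coincide, so the diversity factor normally applied to mixed loads is dropped (the helper function, feeder ratings, and 0.7 diversity value are illustrative assumptions, not a standards-compliant sizing procedure):

```python
# Sketch: sizing a main breaker from downstream feeder breakers.
def main_breaker_amps(feeder_amps, ai_load_fraction, diversity=0.7):
    """Minimum main-breaker rating for a set of downstream feeders."""
    total = sum(feeder_amps)
    if ai_load_fraction >= 0.6:      # AI-dominated: peaks coincide, no diversity
        return total
    return total * diversity         # traditional mixed load: apply diversity

feeders = [400, 400, 400, 400]       # four 400 A feeder breakers (assumed)
print(main_breaker_amps(feeders, ai_load_fraction=0.8))   # 1600
print(main_breaker_amps(feeders, ai_load_fraction=0.3))
```

The point is the discontinuity: once AI training load crosses the 60-70% threshold named in the text, the main breaker must carry the full arithmetic sum of its feeders.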

Trend 5: Standardization will become the key to liquid-cooled propulsion

Cold plate and immersion are the two mainstream liquid cooling methods for data centers, and which to choose and how to deploy it quickly has long been a hot topic in the industry. More and more AI servers adopt cold plate liquid cooling, which is also more compatible with traditional air-cooled systems and is therefore favored by many data center operators. However, server manufacturers take varied approaches to liquid cooling design; compatibility problems with quick connectors, blind-mate connections, and manifolds are common, and the boundary of responsibility between IT and facility infrastructure is blurred, all of which greatly limits the acceptance and adoption of liquid cooling in data centers. Compared with cold plate cooling, immersion cooling based on fluorocarbon fluids is not only relatively costly, but many fluorocarbons are synthetic chemicals harmful to the environment and face growing regulatory and policy pressure. As a result, apart from oil-based coolants, immersion cooling will have fewer and fewer fluorocarbon fluids available. Schneider Electric recommends that IT manufacturers provide more standardized designs, covering fluid temperature, pressure, flow rate, and equipment interfaces, along with clearer responsibility boundaries. Schneider Electric will release liquid cooling guidance in the first quarter to help data centers better deploy the technology.

Trend 6: Data centers will pay more attention to WUE

Water scarcity is becoming a serious problem in many regions, making it increasingly important to understand and reduce data center water consumption. A major reason it was long ignored is that the cost of water is often negligible compared with that of electricity; many data centers even improved energy efficiency by consuming more water. But data center water use is now drawing significant local attention, especially in water-scarce areas, where policies are being introduced to limit and optimize it, including using WUE as a design metric and adopting dual controls on water and electricity use. Reducing water consumption will therefore be a key focus for many data center operators in the future. Based on a study of water consumption across the industry, Schneider Electric regards a WUE of 0.3-0.45 L/kWh as a relatively good value. It recommends finding a balance between electricity and water use according to the local water situation, the climate, and the type of data center. The industry can adopt technological innovations such as adiabatic evaporation, indirect evaporative cooling, and liquid cooling to reduce direct water consumption. Operators should report water consumption as part of their Sustainable Development Goals (SDG) reporting and pay attention to the indirect water use embedded in electricity consumption.
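To make the 0.3-0.45 L/kWh benchmark concrete, here is what it implies for a hypothetical facility (the 10 MW capacity and 60% average load are assumptions for illustration):

```python
# Annual water use implied by a given WUE (L per kWh of IT energy).
it_load_kw = 10_000 * 0.60           # 10 MW facility at 60% average load (assumed)
annual_kwh = it_load_kw * 24 * 365   # IT energy per year

for wue in (0.30, 0.45):
    litres = annual_kwh * wue
    print(f"WUE {wue:.2f} L/kWh -> {litres / 1e6:.1f} million litres/year")
```

Even at the "good" end of the range, a mid-sized facility consumes on the order of fifteen to twenty-plus million litres a year, which is why water-scarce regions are starting to regulate it.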

Trend 7: Improving power distribution capacity will become a new demand of the intelligent computing center

In the intelligent computing center, as cabinet power density rises and AI cabinets are deployed in clusters, IT room power distribution faces the challenge of insufficient rated capacity. In the past, a 300 kW power distribution module could support dozens or even hundreds of cabinets; today, the same module cannot support even a minimum-spec NVIDIA DGX SuperPOD AI cluster (a row of 10 cabinets totaling 358 kW, about 36 kW each). When distribution modules are too small, using several of them not only wastes IT space but becomes impractical, and multiple modules cost more than a single high-capacity one. Returning to the essence of power distribution, the main means of increasing capacity is to increase current. Schneider Electric recommends selecting distribution modules with high enough ratings at design time, supporting at least one entire row of clusters, to allow flexible deployment and accommodate future power distribution needs. For example, at rated voltage, an 800 A distribution module, currently the standard capacity size across all three distribution types (PDU, RPP, and busbar), delivers 576 kW (461 kW derated). For end-of-row distribution, small busbars can be used, avoiding the need for custom cabinet PDUs rated above 63 A; space permitting, multiple standardized cabinet PDUs can serve as a transition.
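The 800 A module capacity quoted above follows from standard three-phase power arithmetic (the 415 V distribution voltage and 80% continuous derating are assumptions, since the text does not state them explicitly):

```python
# Reproducing the 800 A distribution module capacity figure.
import math

amps, volts = 800, 415                         # assumed 415 V three-phase
full_kw = math.sqrt(3) * volts * amps / 1000   # P = sqrt(3) * V * I
derated_kw = full_kw * 0.8                     # assumed 80% continuous derating
print(f"{full_kw:.0f} kW full, {derated_kw:.0f} kW derated")
```

This lands at roughly 575 kW and 460 kW, matching the quoted 576 kW and 461 kW to within rounding of the assumed voltage.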

Trend 8: AI will enable the energy-saving transformation of data centers

By providing AI computing power, data centers are helping human society evolve in more sustainable directions such as automation, digitalization, and electrification, enabling transportation, manufacturing, and power generation to reduce their environmental impact. In turn, AI can help data centers optimize their own energy use. For example, AI and machine learning can control the cooling source system and air conditioning terminals: by analyzing historical data, monitoring the data center's airflow distribution in real time, and matching cooling output to changes in IT load, then automatically adjusting the operating mode of precision air conditioners and their fans, a data center can achieve dynamic, on-demand cooling that reduces hot spots while cutting equipment room energy consumption and O&M costs. Schneider Electric believes that applying AI in the computer room air conditioning group control system enables intelligent monitoring and control of the room's environmental parameters and improves energy efficiency and system reliability through automatic adjustment and optimization, achieving energy conservation and emission reduction. As AI technology spreads and national requirements for data center energy conservation tighten, AI will see growing attention and application in air conditioning group control systems, in new construction and renovation projects alike.
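The on-demand cooling idea above can be reduced to a minimal sketch: track the measured IT load plus a safety margin instead of running cooling at a fixed worst-case output (everything here, from the helper function to the load samples and margin, is an illustrative assumption; real group-control systems use ML models and many more sensor inputs):

```python
# Minimal sketch of dynamic on-demand cooling vs. fixed worst-case output.
def cooling_setpoint_kw(it_load_kw: float, margin: float = 0.10) -> float:
    """Cooling output tracking IT load with a safety margin (assumed 10%)."""
    return it_load_kw * (1 + margin)

hourly_it_load = [300, 320, 450, 600, 580, 400]   # hourly IT load samples, kW
fixed_output = 700                                 # static worst-case sizing, kW

dynamic = [cooling_setpoint_kw(load) for load in hourly_it_load]
saved = sum(fixed_output - c for c in dynamic)     # kWh over the hourly samples
print(f"Cooling energy avoided across samples: {saved:.0f} kWh")
```

The saving comes entirely from the hours when IT load sits well below peak, which is exactly where AI-driven group control earns its keep.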

Trend 9: The footprint of the power distribution system will attract attention

Maximizing the proportion of floor space devoted to IT equipment, that is, minimizing the space occupied by auxiliary equipment, has always been one of the main goals of data center design. For traditional data centers, the ratio of IT room area to power distribution room area is usually around 1.5:1. As AI drives cabinet densities up and more IT rooms adopt liquid cooling, that ratio will invert to around 0.6:1 for liquid-cooled IT rooms. The footprint of the power distribution room will then attract far more attention from data center designers, and optimizing it will become a development direction for the industry. Schneider Electric believes that packing more power supply and distribution capacity into a smaller footprint is one of the most effective approaches. Examples include shrinking the UPS system, with modular UPS using higher-power modules to reach megawatt-level power in a single cabinet; replacing lead-acid batteries with lithium batteries, which can cut battery floor space by 40%-60%; centralizing the deployment of power supply and distribution equipment (e.g., power skids); and using compact modular distribution cabinets and shared emergency power sources such as pooled diesel generators.
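The scale of the inversion described above is easy to see with a fixed-size example (the 1,000 m² IT room is an assumption; the 1.5:1 and 0.6:1 ratios are the figures from the text):

```python
# Power distribution room area implied by the IT-room : power-room ratios.
it_area = 1000.0                          # assumed IT room size, m^2

air_cooled_power_room = it_area / 1.5     # traditional: IT room : power room ~ 1.5:1
liquid_cooled_power_room = it_area / 0.6  # liquid-cooled: ~ 0.6:1
print(f"Power room grows from ~{air_cooled_power_room:.0f} m^2 "
      f"to ~{liquid_cooled_power_room:.0f} m^2")
```

For the same IT room, the power distribution room roughly two-and-a-half times over, which is why its footprint becomes a first-order design concern.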

Trend 10: The value of energy storage systems in data centers is becoming increasingly prominent

UPS systems have long played an important role in power quality management and uninterrupted power supply in data centers. As operators face pressure to improve sustainability and financial performance while maintaining or enhancing the reliability and resilience of their power supply and distribution systems, new energy storage and generation technologies offer new possibilities, but also challenge traditional data center operating models and electrical architectures. Distributed energy technologies, such as batteries and fuel cells, can efficiently generate or store clean energy. Beyond the functions of a traditional UPS, an energy storage system can manage peak power demand by releasing stored energy during peak consumption; through this peak shaving and valley filling it reduces the data center's electricity cost, and by participating in grid demand response it can generate revenue. Schneider Electric believes that the need to reduce energy costs, make the most of stranded assets, reduce reliance on diesel generators, and maintain grid-independent business resilience creates increasingly effective use cases and value for energy storage in data centers. With the declining cost of lithium battery energy storage systems and ongoing innovation in electrical architecture, data centers can gain greater control and autonomy over their energy supply through microgrid systems; even without a microgrid, deploying energy storage can yield a competitive advantage.
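The peak shaving and valley filling economics mentioned above boil down to a tariff-arbitrage calculation (all tariffs, battery size, and efficiency below are made-up illustrative numbers, not figures from the source):

```python
# Sketch of daily peak-shaving savings from a battery energy storage system.
peak_price, offpeak_price = 1.2, 0.4   # electricity tariffs per kWh (assumed)
battery_kwh = 2000                     # usable battery capacity, kWh (assumed)
efficiency = 0.90                      # round-trip efficiency (assumed)

# Charge off-peak, discharge during the peak window, one cycle per day:
# revenue from displaced peak energy minus the cost of off-peak charging.
daily_saving = battery_kwh * efficiency * peak_price - battery_kwh * offpeak_price
print(f"Daily saving: {daily_saving:.0f}, annual: {daily_saving * 365:.0f}")
```

Whether the arbitrage pays off depends on the peak/off-peak spread beating the round-trip losses; demand-response payments, where available, stack on top of this.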

Entering 2024, the focus of the data center industry will shift from the construction of traditional data centers to the construction of intelligent computing centers, and the key is to achieve the sustainable development of intelligent computing centers and adapt to the next generation of IT technologies through continuous technological innovation.

The above responses to emerging trends come from Schneider Electric's Global Data Center Research Center, established in the 1990s. The center's mission has always been to explore the technology and development trends of the data center industry and advocate best practices, helping data center users improve availability and optimize energy efficiency by publishing easy-to-understand white papers and trade-off tools, empowering the sustainable development of data centers and maximizing their business value. As of 2023, the team had published more than 230 white papers, downloaded more than 400,000 times per year, and some 30 trade-off tools used by more than 20,000 users every year. All of the white papers and trade-off tools are freely available to the entire industry, a testament to Schneider Electric's position as a thought leader while advancing the data center industry.
