The NVIDIA H100 AI chip: strong performance, steep power consumption

Mondo Digital Updated on 2024-01-31

NVIDIA is the world's leading manufacturer of graphics processing units (GPUs) and artificial intelligence (AI) chips, which are widely used in gaming, data centers, high-performance computing, machine learning, and other fields. At its 2022 GTC conference, NVIDIA unveiled the H100 AI chip based on the new Hopper architecture. Built on TSMC's advanced 4nm process, it integrates 80 billion transistors, 18,432 CUDA cores, and 576 Tensor cores, making it currently the most powerful AI accelerator in the world.

However, the H100's powerful performance comes at the cost of substantial energy consumption. According to NVIDIA's official figures, the chip's peak power draw is as high as 700 watts, exceeding the average power consumption of a typical American home. [2] At an annual utilization rate of 61%, each H100 would consume approximately 3,740 kilowatt-hours (kWh) of electricity per year. [3] With a projected 3.5 million H100 chips deployed by the end of 2024, total annual consumption would reach about 13,091.82 gigawatt-hours (GWh), roughly the annual electricity consumption of an entire country such as Lithuania or Guatemala (about 13,092 GWh).
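The fleet-wide figure follows directly from the per-chip numbers. A minimal sketch of the arithmetic (the 700 W, 61% utilization, and 3.5 million chip figures come from the article; the rest is plain unit conversion):

```python
# Estimate annual energy use of H100 chips from the article's figures.
PEAK_POWER_KW = 0.7        # 700 W peak power per H100
UTILIZATION = 0.61         # annual utilization rate cited in the article
HOURS_PER_YEAR = 8760
FLEET_SIZE = 3_500_000     # projected deployments by end of 2024

# Per-chip annual consumption in kilowatt-hours
kwh_per_chip = PEAK_POWER_KW * HOURS_PER_YEAR * UTILIZATION
print(f"Per chip: {kwh_per_chip:,.0f} kWh/year")

# Fleet-wide annual consumption in gigawatt-hours (1 GWh = 1e6 kWh)
fleet_gwh = kwh_per_chip * FLEET_SIZE / 1e6
print(f"Fleet: {fleet_gwh:,.0f} GWh/year")
```

The per-chip result lands at roughly 3,740 kWh and the fleet total at roughly 13,092 GWh, matching the figures quoted above.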

This means that, taken together, the deployed H100 chips would consume more electricity per year than some small European countries, approaching the level of a large American city. The environmental and energy implications are significant. On the one hand, the H100's high energy consumption adds to greenhouse gas emissions and exacerbates global warming. On the other, it increases pressure on power grids, which could drive up electricity prices or raise the risk of power shortages.

So why did NVIDIA launch an AI chip with such striking energy consumption? NVIDIA CEO Jensen Huang said at the launch event that the H100 is designed to meet the needs of next-generation AI data centers, powering large-scale AI language models, deep recommendation systems, genomics, and complex digital twins. He argued that the H100's performance and efficiency far exceed those of its competitors, allowing it to provide customers with better AI services and experiences.

In fact, the H100 has set a number of AI benchmark records, demonstrating its strength in both training and inference. For example, NVIDIA claims the H100 can accelerate the training of Mixture-of-Experts (MoE) models by up to 9x, cutting training time from weeks to days, and can speed up chatbot inference by up to 30x while meeting the sub-second latency requirements of real-time conversational AI.
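To put the claimed speedups in concrete terms, here is a small illustrative calculation. The 9x and 30x factors are NVIDIA's published claims; the baseline values (a three-week training run, a 300 ms inference latency) are hypothetical round numbers chosen only to show the scale:

```python
# Illustrative effect of the claimed H100 speedups (baselines are hypothetical).
TRAIN_SPEEDUP = 9          # claimed MoE training speedup
INFER_SPEEDUP = 30         # claimed chatbot inference speedup

baseline_train_days = 21.0     # hypothetical three-week training run
baseline_latency_ms = 300.0    # hypothetical per-response latency

accelerated_days = baseline_train_days / TRAIN_SPEEDUP
accelerated_ms = baseline_latency_ms / INFER_SPEEDUP

print(f"Training: {baseline_train_days} days -> {accelerated_days:.1f} days")
print(f"Inference: {baseline_latency_ms} ms -> {accelerated_ms:.0f} ms")
```

Under these assumed baselines, a three-week run shrinks to a little over two days, and a 300 ms response drops to 10 ms, consistent with "weeks to days" and sub-second conversational latency.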

According to NVIDIA, the H100's energy consumption can be mitigated through hardware-software co-optimization, as well as the use of renewable energy and other energy-saving measures. [3] NVIDIA also argues that the chip's energy consumption is proportional to the AI value it delivers: the H100 can help tackle major challenges facing humanity, such as medical diagnosis, drug discovery, climate change, and intelligent transportation, thereby bringing substantial benefits to society and the economy.

The release of the H100 clearly demonstrates NVIDIA's technical strength and ambition in AI, and it has also drawn industry and public attention to the energy cost of AI. Whether the H100 can balance performance against energy consumption, and whether it will ultimately bring AI development more benefit than harm, remains to be observed and evaluated.
