AMD launched its strongest AI chip, the Instinct MI300X, at the beginning of this month, claiming that the AI performance of its 8-GPU server is 60% higher than that of an Nvidia H100 8-GPU system. In response, NVIDIA on December 14 released a set of H100 vs. MI300X performance comparison data, showing how the H100 can deliver faster AI performance than the MI300X when paired with the right software.
According to AMD's previously released data, the MI300X's FP8 and FP16 performance reaches 1.3x that of Nvidia's H100, and a single MI300X is 20% faster than the H100 at running both the Llama 2 70B and FlashAttention 2 models. In an 8-GPU vs. 8-GPU server comparison, running the Llama 2 70B model, the MI300X is 40% faster than the H100; running the Bloom 176B model, the MI300X is 60% faster than the H100.
However, it should be pointed out that when comparing the MI300X with the NVIDIA H100, AMD obtained its numbers using the optimized libraries in the latest ROCm 6.0 suite, which support the latest compute formats such as FP16, BF16, and FP8, including sparsity. In contrast, the NVIDIA H100 was tested without optimization software such as NVIDIA's TensorRT-LLM.
AMD's implied figures for the Nvidia H100 were obtained with vLLM v0.2.2 inference software on an NVIDIA DGX H100 system, with Llama 2 70B queries using an input sequence length of 2,048 and an output sequence length of 128.
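For reference, the setup AMD cites (vLLM serving Llama 2 70B across eight GPUs, 2,048 input tokens, 128 output tokens) might look roughly like the sketch below. This is a minimal illustration, not AMD's actual benchmark harness; the model path, placeholder prompt, and timing code are assumptions.

```python
import time
from vllm import LLM, SamplingParams

# Load Llama 2 70B sharded across 8 GPUs (model path is an assumption).
llm = LLM(model="meta-llama/Llama-2-70b-hf", tensor_parallel_size=8)

# Greedy decoding of 128 output tokens, matching the cited output length.
params = SamplingParams(temperature=0.0, max_tokens=128)

# Placeholder prompt; the cited benchmark used a 2,048-token input sequence.
prompt = "Summarize the history of GPU computing."

start = time.perf_counter()
outputs = llm.generate([prompt], params)
latency = time.perf_counter() - start

print(f"Batch-1 latency: {latency:.2f} s")
print(outputs[0].outputs[0].text[:200])
```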
Nvidia's newly announced DGX H100 results (with eight NVIDIA H100 Tensor Core GPUs, each with 80 GB of HBM3) were measured with the publicly available NVIDIA TensorRT-LLM software, v0.5.0 for Batch-1 and v0.6.1 for latency-threshold measurements. The workload details and footnotes are the same as in AMD's earlier tests.
The results show that, when running optimized software, the NVIDIA DGX H100 server delivers more than 2x the performance that AMD's test showed for it, making it 47% faster than the AMD MI300X 8-GPU server.
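For clarity on how such a percentage relates to measured latency: "X% faster" follows from the ratio of per-request latencies (or, equivalently, throughputs). A minimal sketch with example values chosen only for illustration, not taken from either vendor's measurements:

```python
def percent_faster(slow_latency_s: float, fast_latency_s: float) -> float:
    """Express a speedup as 'X% faster' from per-request latencies."""
    return (slow_latency_s / fast_latency_s - 1.0) * 100.0

# Hypothetical example values (not measured data): a server answering the
# same query in 1.7 s is ~47% faster than one that needs 2.5 s.
print(f"{percent_faster(2.5, 1.7):.0f}% faster")  # -> 47% faster
```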
The DGX H100 can process a single inference task in 1.7 seconds. To balance response time and data center throughput, cloud services set fixed response-time targets for specific services. This allows them to combine multiple inference requests into larger "batches" and increase the overall number of inferences per second the server delivers. Industry-standard benchmarks such as MLPerf also use this fixed response-time metric to measure performance.
Small trade-offs in response time can yield large multiples in the number of inference requests a server can handle in real time. Using a fixed 2.5-second response-time budget, the NVIDIA DGX H100 server can handle more than five Llama 2 70B inferences per second, compared with less than one per second at Batch-1.
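The arithmetic behind the batching argument can be sketched as follows; the batch sizes are hypothetical and chosen only to illustrate how grouping requests under a fixed latency budget lifts aggregate throughput.

```python
def inferences_per_second(batch_size: int, batch_latency_s: float) -> float:
    """Aggregate throughput when batch_size requests finish together."""
    return batch_size / batch_latency_s

# Batch-1: the article cites roughly 1.7 s for a single Llama 2 70B inference.
print(f"batch-1 : {inferences_per_second(1, 1.7):.2f} inferences/s")  # ~0.59

# With a fixed 2.5 s response-time budget, many requests can be served in one
# batch. Assuming (simplistically) that the batch still finishes within 2.5 s:
for batch in (4, 8, 13):
    print(f"batch-{batch:<2}: {inferences_per_second(batch, 2.5):.2f} inferences/s")
```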
It seems only fair that Nvidia uses these new benchmarks; after all, AMD also used its own optimized software to evaluate the performance of its GPUs, so why not do the same when testing the Nvidia H100?
It is worth noting that NVIDIA's software stack is built around the CUDA ecosystem, which after years of development holds a very strong position in the AI market, whereas AMD's ROCm 6.0 is new and has not yet been proven in real-world scenarios.
According to information previously revealed by AMD, it has already struck deals with major companies such as Microsoft and Meta, which see its MI300X GPUs as an alternative to Nvidia's H100 solution.
AMD's latest Instinct MI300X is expected to ship in volume in the first half of 2024. However, Nvidia's more powerful H200 GPUs will also ship by then, and Nvidia will launch its next-generation Blackwell B100 in the second half of 2024. In addition, Intel will launch its new-generation AI chip, Gaudi 3. Competition in the field of artificial intelligence looks set to become even more intense.
Editor: Xinzhixun-Rogue Sword.