Nvidia H100 AI GPU shortage eases, delivery time drops from 3-4 months to 2-3 months

Not long ago, demand for Nvidia's H100 GPU for AI computing far exceeded supply. However, according to DigiTimes, Terence Liao, general manager of Dell Taiwan, said that delivery lead times for the Nvidia H100 have shortened considerably over the past few months, falling from 3-4 months to the current 2-3 months (8-12 weeks). Server contract manufacturers also report that, compared with 2023, when the Nvidia H100 was almost impossible to buy, the supply bottleneck is gradually easing.


Despite the shorter delivery wait times, Liao said demand for AI hardware remains strong: even at higher prices, purchases of AI servers are displacing purchases of general-purpose servers. He believes, however, that the long delivery cycles themselves are the main reason demand appears to remain so high.

The current 2-3 month delivery waiting time is the shortest in the history of the Nvidia H100 GPU. Just six months ago, the wait was as long as 11 months, and most Nvidia customers had to wait nearly a year to receive the AI GPUs they ordered.

Since the beginning of 2024, H100 delivery wait times have been falling significantly. At the start of the year, the wait had already dropped from many months to 3-4 months, and it has since shrunk by another month. At this rate, waiting times could disappear entirely by the end of the year, or even sooner.

Part of the reason for this change may be that some companies holding excess H100 inventory are reselling GPUs to offset the high cost of carrying idle stock. In addition, Amazon Web Services (AWS) has made it easier for users to rent Nvidia H100 GPUs through the cloud, which also relieves some of the demand pressure.

The only Nvidia customers still facing supply constraints are large companies, such as OpenAI, that are developing their own large language models (LLMs). Training an LLM quickly and efficiently requires tens of thousands of GPUs, so these companies continue to face supply bottlenecks.
