Second only to Meta, Musk revealed the number of Nvidia H100 chips stockpiled by Tesla

Elon Musk's Tesla and his secretive AI-focused company xAI have built large stockpiles of Nvidia H100-series chips. Tesla intends to use its hardware to crack the ultimate challenge of autonomous driving, Level 5 self-driving, while xAI is tasked with realizing Musk's vision of an "ultimate truth" artificial intelligence.

X platform user "The Technology Brother" recently posted that Meta has stockpiled the world's largest number of H100 GPUs, a staggering 350,000. Musk, however, took issue with the chart's low ranking of Tesla and xAI (listed at 10,000), noting that "if the calculation is correct, Tesla should be second and xAI would be third."

This statement implies that Tesla may currently hold somewhere between 30,000 and 350,000 H100 GPUs, while xAI has roughly 26,000 to 30,000. In January this year, Musk confirmed that an additional $500 million (equivalent to roughly 10,000 H100 GPUs) would be invested in Tesla's Dojo supercomputer, while emphasizing that Tesla "will invest more in Nvidia hardware this year," because remaining competitive in artificial intelligence currently requires an investment of "at least billions of dollars per year."

xAI has also been actively stockpiling computing power. To form xAI in 2023, Musk recruited top artificial intelligence talent from DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. At the time, xAI was reported to have purchased about 10,000 Nvidia GPUs, though those were presumably A100-series parts. Judging from Musk's recent statement, xAI now holds a considerable number of H100 GPUs as well.

However, the rapid pace of iteration in artificial intelligence puts these high-end chips at risk of quickly becoming obsolete. In March this year, Nvidia unveiled the GB200 Grace Blackwell superchip, which pairs an Arm-based Grace CPU with two Blackwell B200 GPUs. Nvidia says systems built on it can run artificial intelligence models with up to 27 trillion parameters and answer chatbot queries up to 30 times faster.
