Meta unveiled two new data center clusters in an official press release on the 12th local time; the company is pinning its hopes on NVIDIA GPUs, which stand out in AI-focused development.
The two data centers are said to be dedicated exclusively to AI research and to developing large language models for consumer-facing application areas (IT House note: including sound and image recognition). Each cluster contains 24,576 NVIDIA H100 AI GPUs and will be used to improve Meta's own large language model, Llama 3.
Both new data center clusters feature 400 Gbps interconnects: one uses Meta's own network fabric solution based on the Arista 7800, while the other uses NVIDIA's Quantum-2 InfiniBand fabric, ensuring a seamless interconnect experience.
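To make the fabric choice a bit more concrete, the sketch below shows how a distributed PyTorch training process typically initializes NCCL, which carries traffic over either an InfiniBand or an Ethernet/RoCE fabric through the same interface. This is a generic, hypothetical example: the environment-variable values are placeholders and do not reflect Meta's actual configuration.

```python
import os
import torch
import torch.distributed as dist

def init_rank() -> None:
    """Initialize one rank of a data-parallel job; NCCL drives the RDMA fabric."""
    # Optional NCCL tuning knobs (placeholder values, not Meta's settings):
    os.environ.setdefault("NCCL_IB_HCA", "mlx5")      # restrict NCCL to these RDMA NICs
    os.environ.setdefault("NCCL_IB_GID_INDEX", "3")   # commonly required on RoCE fabrics

    # Rank and world size are supplied by the launcher (e.g. torchrun or SLURM).
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

if __name__ == "__main__":
    init_rank()
    # A single all-reduce exercises the inter-node interconnect.
    t = torch.ones(1, device="cuda")
    dist.all_reduce(t)
    print(f"rank {dist.get_rank()}: sum across ranks = {t.item()}")
```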
In addition, both clusters are built on Grand Teton, Meta's own open GPU hardware platform, which leverages the capabilities of modern accelerators by increasing host-to-GPU bandwidth and compute capacity.
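As an illustration of what "host-to-GPU bandwidth" means in practice, the hypothetical snippet below times a pinned host-to-device copy with PyTorch and reports the achieved rate; it measures whatever machine it runs on and says nothing about Grand Teton hardware itself.

```python
import time
import torch

def host_to_gpu_bandwidth_gb_s(size_mb: int = 1024) -> float:
    """Copy a pinned host buffer to the GPU and return the achieved GB/s."""
    n_bytes = size_mb * 1024 * 1024
    src = torch.empty(n_bytes, dtype=torch.uint8, pin_memory=True)   # pinned host memory
    dst = torch.empty(n_bytes, dtype=torch.uint8, device="cuda")

    torch.cuda.synchronize()
    start = time.perf_counter()
    dst.copy_(src, non_blocking=True)   # asynchronous host-to-device transfer
    torch.cuda.synchronize()            # wait for the copy to finish
    return n_bytes / (time.perf_counter() - start) / 1e9

if __name__ == "__main__":
    print(f"Pinned host-to-GPU copy: {host_to_gpu_bandwidth_gb_s():.1f} GB/s")
```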
Meta officials say that the efficient, high-performance network fabric and key storage decisions behind these clusters, combined with the H100 GPUs in each one, allow them to support larger and more complex models, paving the way for advances in generative AI product development and AI research.
Meta CEO Mark Zuckerberg has said the company is building out massive infrastructure: "The prediction is that by the end of this year, we will have about 350,000 NVIDIA H100 accelerator cards, which is the equivalent in computing power of roughly 600,000 H100s if you count our other GPUs."
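A quick back-of-envelope check, using only the two figures in the quote above, shows how much of that compute would have to come from GPUs other than the H100s:

```python
# Figures taken directly from the quote above.
h100_cards = 350_000            # H100 accelerator cards expected by year end
total_h100_equivalents = 600_000

other_gpu_equivalents = total_h100_equivalents - h100_cards
share = other_gpu_equivalents / total_h100_equivalents
print(f"Other GPUs account for ~{other_gpu_equivalents:,} H100-equivalents "
      f"(about {share:.0%} of the total).")
# Prints: Other GPUs account for ~250,000 H100-equivalents (about 42% of the total).
```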