According to media reports, Meta's second-generation self-developed AI chip, Artemis, will officially enter production this year.
It is understood that the new chip will be used for inference tasks in data centers and work together with GPUs from suppliers such as Nvidia.
A Meta spokesperson previously said: "We believe that our self-developed accelerators will complement commercially available GPUs and deliver the best balance of performance and efficiency for Meta's workloads."
In addition to running recommendation models more efficiently, Meta also needs to supply computing power for its own generative AI applications and for Llama 3, the open-source competitor to GPT-4 that it is currently training.
Meta CEO Mark Zuckerberg previously announced plans to deploy 350,000 Nvidia H100 GPUs by the end of this year, which would give Meta a total of approximately 600,000 GPUs for running and training AI systems.
Beyond Meta, OpenAI and Microsoft are also working on proprietary AI chips and more efficient models in an effort to rein in spiraling costs.