Meta Platforms has unveiled details about its latest custom artificial intelligence accelerator chip.
Meta’s plans to launch a new iteration of its proprietary data center chip, intended to handle the growing computational demands of running AI applications across Facebook, Instagram, and WhatsApp, were reported earlier this year. Internally dubbed “Artemis,” the chip is aimed at reducing Meta’s dependence on Nvidia’s AI chips and lowering overall energy costs.
In a blog post, the company said the chip’s design focuses on striking the right balance between compute power, memory bandwidth, and memory capacity for serving ranking and recommendation models.
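That balance matters because recommendation workloads are often limited by how fast data can be moved rather than by raw compute. The sketch below is a minimal roofline-style illustration of that trade-off; the peak-compute and bandwidth figures are hypothetical placeholders, not published MTIA specifications.

```python
# Illustrative roofline-style check of whether a workload is compute-bound
# or memory-bandwidth-bound. All hardware numbers are assumed for
# illustration only; they are not MTIA specs.

PEAK_TFLOPS = 100.0          # assumed peak compute, TFLOP/s
PEAK_BANDWIDTH_GBS = 800.0   # assumed memory bandwidth, GB/s

def bound_type(flops: float, bytes_moved: float) -> str:
    """Compare a workload's arithmetic intensity (FLOPs per byte moved)
    to the hardware's balance point to see which resource limits it."""
    intensity = flops / bytes_moved
    balance = (PEAK_TFLOPS * 1e12) / (PEAK_BANDWIDTH_GBS * 1e9)
    return "compute-bound" if intensity > balance else "memory-bandwidth-bound"

# Embedding lookups in recommendation models move many bytes per arithmetic
# operation, so they tend to land on the memory-bandwidth-bound side.
print(bound_type(flops=2e9, bytes_moved=4e9))    # low intensity  -> memory-bandwidth-bound
print(bound_type(flops=2e12, bytes_moved=4e9))   # high intensity -> compute-bound
```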
The newly introduced chip is called the Meta Training and Inference Accelerator (MTIA). It is part of Meta’s broader custom silicon effort, which also covers other hardware systems. Alongside chip development, Meta has invested heavily in software to make efficient use of its infrastructure.
The company is also spending billions of dollars on Nvidia and other AI chips: CEO Mark Zuckerberg has said Meta plans to acquire approximately 350,000 of Nvidia’s flagship H100 chips this year and, counting chips from other suppliers, to amass the equivalent of 600,000 H100s by the end of the year.
The chip will be manufactured by Taiwan Semiconductor Manufacturing Co using its 5nm process. Meta claims it offers three times the performance of its predecessor.
The chip has already been deployed in Meta’s data centers and is serving AI applications.
Written by Alius Noreika