Meta, the social media giant formerly known as Facebook, is an established leader in the increasingly popular field of AI (Artificial Intelligence). To maintain and improve its services for users, Meta has announced several new projects involving advanced hardware and software. The newly announced projects are intended to make AI model training and inference faster and more efficient.
One of Meta’s new announcements was the design of a new AI data center, tailored for both training and inference. The center will use MTIA (Meta Training and Inference Accelerator) chips, Meta’s custom silicon, to make a range of AI workloads (such as computer vision, natural language processing, and recommendation systems) faster and simpler to run.
Additionally, the company has already built the Research SuperCluster (RSC), a powerful AI supercomputer with 16,000 GPUs, to help train large language models such as the recently launched LLaMA.
Meta’s CEO Mark Zuckerberg noted in a statement that the organization has long driven the technological advances that underpin its own products and services.
The idea that AI inference can go beyond the limitations of traditional CPUs is not new. Rivals such as Microsoft, Nvidia, IBM, and Google already have technologies in place built on GPUs and custom-designed Infrastructure Processing Units. Meta is breaking new ground of its own with the MTIA chip, custom silicon designed specifically to serve enterprise AI needs.
Meta’s efforts to fine-tune its AI infrastructure are essential to the company’s success. A new liquid-cooling system was developed to conserve energy when running AI workloads. The end goal is to create the most efficient environment possible, one in which AI development can thrive and users receive the best possible experience.