    Sunday, July 21, 2024

      Meta Unveiled Next-Generation AI Chipset to Build Large-Scale AI Infrastructure | Details Inside

Meta has unveiled its next-generation Meta Training and Inference Accelerator (MTIA), its family of custom-made chipsets for artificial intelligence (AI) workloads. The upgrade comes almost a year after the company introduced its first AI chips.

These accelerators will power Meta’s existing and future products and services, as well as the AI features within its social media platforms.

Meta highlighted that the chipset will be used to serve its ranking and recommendation models.

Making the announcement in a blog post, Meta said:

      “The next generation of Meta's large-scale infrastructure is being built with AI in mind, including supporting new generative AI (GenAI) products and services, recommendation systems, and advanced AI research. It's an investment we expect will grow in the years ahead as the compute requirements to support AI models increase alongside the models' sophistication.”

As per Meta, the new AI chip offers significant improvements in both performance and power efficiency thanks to changes in its architecture.

The next generation of MTIA doubles the compute and memory bandwidth compared with its predecessor.

It is also built to serve the recommendation models Meta uses to personalise content for users on its social media platforms.

Meta also said that the system has a rack-based design that holds up to 72 accelerators: three chassis, each containing 12 boards, with each board housing two accelerators.
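The rack arithmetic above can be checked directly; this short Python sketch (variable names are purely illustrative) reproduces the figure Meta quotes:

```python
# Rack layout as described by Meta: 3 chassis per rack,
# 12 boards per chassis, 2 accelerators per board.
chassis_per_rack = 3
boards_per_chassis = 12
accelerators_per_board = 2

accelerators_per_rack = (
    chassis_per_rack * boards_per_chassis * accelerators_per_board
)
print(accelerators_per_rack)  # 72
```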


The processor is clocked at 1.35 GHz, considerably faster than its predecessor's 800 MHz.

It also runs at a higher power envelope of 90 W.

The fabric between the accelerators and the host has also been upgraded to PCIe Gen 5.

      The software stack is where the company has made major improvements.

The chipset is designed to integrate fully with PyTorch 2.0 and related features.

Meta explained:

“The lower level compiler for MTIA takes the outputs from the frontend and produces highly efficient and device-specific code.”
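PyTorch 2.0 exposes this kind of compiler integration through `torch.compile`, which captures a model as a graph and hands it to a pluggable backend for device-specific code generation. The sketch below is a minimal illustration of that mechanism only; the toy backend is an assumption for demonstration and is not Meta's actual MTIA backend:

```python
import torch

# A torch.compile backend receives the captured FX graph plus example
# inputs. A real MTIA backend would lower this graph to device-specific
# code; this stand-in simply returns the graph's forward callable.
def toy_backend(gm: torch.fx.GraphModule, example_inputs):
    return gm.forward  # a real compiler would emit optimized code here

model = torch.nn.Linear(4, 2)
compiled = torch.compile(model, backend=toy_backend)

out = compiled(torch.randn(3, 4))
print(out.shape)  # torch.Size([3, 2])
```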

      The results so far show that this MTIA chip can handle both the low complexity (LC) and high complexity (HC) ranking and recommendation models that are components of Meta’s products.

      Across all these models, there can be a ~10x-100x difference in model size and the amount of compute per input sample.

Meta added: “Because we control the whole stack, we can achieve greater efficiency compared to commercially available GPUs. Realizing these gains is an ongoing effort and we continue to improve performance per watt as we build up and deploy MTIA chips in our systems.”

With the rise of AI, many technology companies are now focusing on developing customised AI chipsets that cater to their particular needs.

These processors offer massive compute power for their servers, enabling the companies to deliver products such as general-purpose AI chatbots and task-specific AI tools.
