
NVIDIA MGX Server Specification For System Manufacturers Unveiled

by Harold Fritts

NVIDIA’s Computex opening keynote introduced the NVIDIA MGX server specification, a modular reference architecture that enables system manufacturers to quickly and cost-effectively build more than 100 server variations for a broad range of AI, high-performance computing, and Omniverse applications.

NVIDIA MGX

Manufacturers such as ASRock Rack, ASUS, GIGABYTE, Pegatron, QCT, and Supermicro will adopt MGX, which is expected to cut development costs by up to 75 percent and reduce development time by two-thirds, to roughly six months. MGX gives manufacturers a basic system architecture optimized for accelerated computing in their server chassis, then lets them choose their own GPU, DPU, and CPU. Design variations can target specific workloads such as HPC, data science, large language models, edge computing, graphics and video, enterprise AI, and design and simulation. MGX systems integrate easily into cloud and enterprise data centers.

Supermicro and QCT will be first to market, with MGX designs appearing in August. Supermicro's newly announced ARS-221GL-NR system includes the NVIDIA Grace CPU Superchip, while QCT's newly announced S74G-2U system will use the NVIDIA GH200 Grace Hopper Superchip. SoftBank Corp. also plans to roll out multiple hyperscale data centers across Japan, using MGX to dynamically allocate GPU resources between generative AI and 5G applications.

Flexibility In Design

Data centers are under pressure to deliver growing compute capability and to reduce carbon emissions to address climate change, all while keeping costs in check. The modular design of MGX gives system manufacturers the tools to meet customers' unique budget, power delivery, thermal design, and mechanical requirements.

MGX works with different form factors and is compatible with current and future generations of NVIDIA hardware, including:

  • Chassis: 1U, 2U, 4U (air or liquid cooled)
  • GPUs: Full NVIDIA GPU portfolio including the latest H100, L40, L4
  • CPUs: NVIDIA Grace CPU Superchip, GH200 Grace Hopper Superchip, x86 CPUs
  • Networking: NVIDIA BlueField®-3 DPU, ConnectX-7 network adapters
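
To get a feel for how quickly these modular options multiply into the "over 100 server variations" figure, here is a rough, hypothetical enumeration in Python. The option lists mirror the compatibility summary above; the combinatorics are purely illustrative, not an official MGX configurator, and real designs would rule out some pairings.

```python
# Hypothetical back-of-the-envelope enumeration of MGX build options.
# The option lists mirror the compatibility summary above (chassis, cooling,
# GPU, CPU, networking); this is illustrative only and does not represent an
# official NVIDIA MGX configurator or a set of validated designs.
from itertools import product

chassis    = ["1U", "2U", "4U"]
cooling    = ["air-cooled", "liquid-cooled"]
gpus       = ["H100", "L40", "L4"]  # a subset of the full NVIDIA GPU portfolio
cpus       = ["Grace CPU Superchip", "GH200 Grace Hopper Superchip", "x86"]
networking = ["BlueField-3 DPU", "ConnectX-7"]

variations = list(product(chassis, cooling, gpus, cpus, networking))
print(f"{len(variations)} combinations from these options alone")  # 108
print("Example build:", " / ".join(variations[0]))
```

Even this small subset of options yields 108 combinations, in line with the more-than-100-variations claim, though thermally or electrically impractical pairings would be pruned in practice.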

MGX was designed to offer flexible, multi-generational compatibility with NVIDIA products, ensuring system builders can reuse existing designs and easily adopt next-generation products. This differs from NVIDIA HGX, which is built around an NVLink-connected multi-GPU baseboard tailored for scaling up to the largest AI and HPC systems.

In addition to hardware, MGX is supported by NVIDIA’s full software stack, which enables developers and enterprises to build and accelerate AI, HPC, and other applications. This includes NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, which features over 100 frameworks, pre-trained models, and development tools to accelerate AI and data science for fully supported enterprise AI development and deployment.
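
As a concrete illustration of that software stack in action, the minimal sketch below checks that a CUDA-capable GPU is visible and runs a small matrix multiply on it. It assumes PyTorch with CUDA support is installed (for example, from an NVIDIA NGC container); PyTorch is used here purely as a representative accelerated framework, not as a statement of exactly what NVIDIA AI Enterprise bundles.

```python
# Minimal sanity check that an accelerated framework sees the NVIDIA GPU.
# Assumes PyTorch with CUDA support is installed (e.g., from an NGC container).
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print(f"GPU detected: {torch.cuda.get_device_name(device)}")
    # Run a small matrix multiply on the GPU to confirm the stack end to end.
    a = torch.randn(1024, 1024, device=device)
    b = torch.randn(1024, 1024, device=device)
    c = a @ b
    print(f"Matmul result shape: {tuple(c.shape)}, dtype: {c.dtype}")
else:
    print("No CUDA-capable GPU visible to PyTorch.")
```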

NVIDIA MGX is compatible with the Open Compute Project and Electronic Industries Alliance server racks for quick integration into enterprise and cloud data centers.
