Enterprise

Supermicro Announces New Liquid-Cooled NVIDIA HGX Servers

Supermicro has announced new server offerings that incorporate NVIDIA’s latest HGX H200 platform, built on H200 Tensor Core GPUs, which is set to deliver significant advancements in generative AI and LLM training.

The company is preparing to release AI platforms that include both 8U and 4U Universal GPU Systems, fully equipped to support the HGX H200 in 8-GPU and 4-GPU configurations. These systems feature enhanced memory bandwidth thanks to HBM3e technology, which offers nearly double the capacity and 1.4 times the bandwidth of the previous generation. This leap in hardware capability is expected to meet the growing demand for more complex computational tasks in AI research and development.

In addition, Supermicro announced a high-density server with an NVIDIA HGX H100 8-GPU system in a liquid-cooled 4U chassis, which incorporates the company’s newest cooling technologies. Claimed to be the industry’s most compact high-performance GPU server, the system allows for the highest density of AI training capacity in a single rack unit to date, according to Supermicro, while helping to ensure cost and energy efficiency.

Its partnership with NVIDIA has kept Supermicro at the forefront of AI system design, providing optimized solutions for AI training and HPC workloads. The company’s commitment to rapid innovation is evident in its system architecture, which allows for quick market deployment of technological advances. The new AI systems feature NVIDIA’s interconnect technologies, including NVLink and NVSwitch, to support high-speed data transfers at 900GB/s, and offer up to 1.1TB of HBM3e memory per node, optimizing performance for parallel processing of AI algorithms.

Supermicro offers a diverse range of AI servers, including the widely used 8U and 4U Universal GPU Systems. These systems, featuring four-way and eight-way NVIDIA HGX H100 GPUs, are now drop-in ready for the new H200 GPUs (which feature 141GB of memory with 4.8TB/s of bandwidth), allowing for even faster training of larger language models.

Supermicro will be showcasing the 4U Universal GPU System at the upcoming SC23.

Lyle Smith

Lyle is a staff writer for StorageReview, covering a broad set of end user and enterprise IT topics.
