StorageReview.com
Enterprise  ◇  Server

ASRock Rack Unveils Liquid-Cooled AI Systems Built Around NVIDIA Rubin and Blackwell at GTC 2026

ASRock Rack used NVIDIA GTC 2026 to introduce a broader range of liquid-cooled AI platforms for high-density enterprise and data center deployments. The announcement centered on new systems based on the NVIDIA HGX Rubin NVL8 platform, as well as NVIDIA MGX-based servers designed for the NVIDIA RTX PRO 4500 Blackwell Server Edition and liquid-cooled RTX…

AI  ◇  Enterprise

IBM and NVIDIA Announce Expanded Partnership to Operationalize Enterprise AI

At GTC 2026, IBM and NVIDIA announced a significant expansion of their partnership of more than a decade, focusing on moving AI from pilot phases to full-scale production. The collaboration targets several critical bottlenecks in enterprise AI adoption, including GPU-native data analytics, intelligent document processing, and infrastructure for regulated environments. The joint effort aims to provide a…

AI  ◇  Enterprise  ◇  Enterprise Storage

VDURA Introduces RDMA and Context-Aware Tiering for AI Data Platforms at GTC 2026

During GTC 2026, VDURA showcased updates to its Data Platform that improve GPU utilization and storage efficiency in AI environments. The announcement includes the general availability of Remote Direct Memory Access (RDMA) support, a preview of its Context-Aware Tiering technology, and validated infrastructure setups based on AMD EPYC Turin CPUs and NVIDIA ConnectX-7 networking. The updates…

AI  ◇  Enterprise

NVIDIA Groq 3 LPX: Everything we know

The LPU, or Language Processing Unit, is a custom AI inference accelerator designed and built by Groq, Inc. Founded in 2016 by Jonathan Ross, a former Google engineer credited as one of the original inventors of the TPU, Groq spent years developing a deterministic, software-defined processor architecture from the ground up. Unlike GPUs, which rely…

Enterprise  ◇  Networking

HPE Introduces AI Grid to Connect AI Factories and Distributed Inference Clusters Using NVIDIA Reference Architecture

HPE has announced the HPE AI Grid, a comprehensive infrastructure solution aligned with the NVIDIA AI Grid reference architecture. It is designed to securely connect AI factories and distributed inference clusters across regional and remote edge locations. HPE positions this platform for service providers that need to deploy and manage thousands of distributed inference sites.

AI  ◇  Enterprise

HPE Expands NVIDIA AI Computing Portfolio with Scalable Private Cloud AI and Blackwell GPU Integration

HPE has announced a significant expansion of the NVIDIA AI Computing by HPE portfolio, introducing integrated systems designed to scale enterprise AI deployments while maintaining security and governance. The update focuses on co-engineered, validated architectures intended to accelerate time-to-value for AI inferencing and model development. HPE CEO Antonio Neri and NVIDIA CEO Jensen Huang positioned…

AI  ◇  Enterprise

NVIDIA DGX Rubin NVL8 Supports Intel Xeon 6 as Host CPU Option for x86-Based AI Inference

At NVIDIA GTC 2026, Intel announced that its Intel Xeon 6 processors are being used as the host CPUs for NVIDIA DGX Rubin NVL8 systems. This design win extends the established use of Xeon within NVIDIA’s GPU platforms and underscores the processor’s role in orchestrating large-scale, GPU-accelerated AI infrastructure. As AI workloads transition toward massive…

AI  ◇  Enterprise

HPE Cray GX5000 and AI Factory Get NVIDIA Vera Rubin NVL72, Quantum-X800 InfiniBand, and New Blackwell Options

HPE has unveiled updates to the NVIDIA AI Computing by HPE portfolio to support large-scale AI factories and next-generation supercomputers. The offerings combine compute, GPUs, networking, liquid cooling, software, and services into full-stack solutions designed for at-scale and sovereign environments. NVIDIA AI Integrated into HPE Exascale Supercomputing Platform: Argonne National Laboratory, HLRS, Hudson River Trading…

AI  ◇  Client Accessories  ◇  Cloud  ◇  Enterprise  ◇  Workstation

Lenovo Expands Hybrid AI Advantage with NVIDIA at GTC 2026: New Inference Platforms, Workstations, and Rack-Scale AI Cloud

At NVIDIA GTC, Lenovo introduced an expanded phase of its Lenovo Hybrid AI Advantage with NVIDIA, positioning the portfolio as an end-to-end path for production AI inferencing across client devices, enterprise infrastructure, and large-scale AI cloud deployments. The announcement centers on accelerating AI adoption, cutting time-to-first-token (TTFT), and improving per-token economics as organizations move from…
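TTFT is the delay between issuing a request and receiving the first generated token, and it is measurable against any streaming inference interface. As a minimal illustrative sketch (the `dummy_stream` generator below is a stand-in for a real model's streaming response, not Lenovo's or NVIDIA's API):

```python
import time

def measure_ttft(stream):
    """Measure time-to-first-token (TTFT) and average tokens/sec
    for any iterable that yields tokens as they are generated."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for token in stream:
        if ttft is None:
            # First token has arrived; record the prefill latency.
            ttft = time.perf_counter() - start
        count += 1
    total = time.perf_counter() - start
    tokens_per_sec = count / total if total > 0 else 0.0
    return ttft, tokens_per_sec

def dummy_stream():
    # Hypothetical stand-in for a streaming model response.
    time.sleep(0.05)           # simulated prefill delay before token 1
    for _ in range(10):
        time.sleep(0.005)      # simulated per-token decode time
        yield "tok"

ttft, tps = measure_ttft(dummy_stream())
```

The same harness works for any generator-style client, which is why TTFT and per-token throughput are convenient, vendor-neutral metrics for comparing inference platforms.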