StorageReview.com
AI  ◇  Enterprise

WEKA Integrates NeuralMesh with NVIDIA STX to Address AI Inference Memory Bottlenecks

WEKA announced integration of its NeuralMesh platform with the NVIDIA STX reference architecture, positioning its Augmented Memory Grid as a core component for next-generation AI infrastructure. The combined solution targets one of the primary constraints in large-scale inference environments: memory limitations that impact performance, cost, and scalability. Running on NeuralMesh, WEKA’s Augmented Memory Grid extends…

WEKA NeuralMesh Dashboard graphic
AI  ◇  Enterprise

WEKA Announces General Availability of NeuralMesh AIDP

WEKA has announced the general availability of its NeuralMesh AI Data Platform, an enterprise-focused, composable infrastructure designed for AI factory deployments. Built on the NVIDIA AI Data Platform reference architecture, NeuralMesh AIDP provides an integrated stack that delivers AI-ready data to production environments, with a focus on accelerating time-to-deployment for large-scale AI applications. The platform…

AI  ◇  Enterprise

IBM and NVIDIA Announce Expanded Partnership to Operationalize Enterprise AI

At GTC 2026, IBM and NVIDIA announced a significant expansion of their decade-plus partnership, focusing on moving AI from pilot phases to full-scale production. The collaboration targets several critical bottlenecks in enterprise AI adoption, including GPU-native data analytics, intelligent document processing, and infrastructure for regulated environments. The joint effort aims to provide a…

VDURA Global Namespace
AI  ◇  Enterprise  ◇  Enterprise Storage

VDURA Introduces RDMA and Context-Aware Tiering for AI Data Platforms at GTC 2026

During GTC 2026, VDURA showcased updates to its Data Platform that improve GPU utilization and storage efficiency in AI environments. The announcement includes the general availability of Remote Direct Memory Access (RDMA) support, a preview of its Context-Aware Tiering technology, and validated infrastructure setups based on AMD EPYC Turin CPUs and NVIDIA ConnectX-7 networking. The updates…

AI  ◇  Enterprise

Groq LPU: Everything We Know

The LPU, or Language Processing Unit, is a custom AI inference accelerator designed and built by Groq, Inc. Founded in 2016 by Jonathan Ross, a former Google engineer credited as one of the original inventors of the TPU, Groq spent years developing a deterministic, software-defined processor architecture from the ground up. Unlike GPUs, which rely…

HPE ProLiant Compute DL380a Gen12
AI  ◇  Enterprise

HPE Expands NVIDIA AI Computing Portfolio with Scalable Private Cloud AI and Blackwell GPU Integration

HPE has announced a significant expansion of the NVIDIA AI Computing by HPE portfolio, introducing integrated systems designed to scale enterprise AI deployments while maintaining security and governance. The update focuses on co-engineered, validated architectures intended to accelerate time-to-value for AI inferencing and model development. HPE CEO Antonio Neri and NVIDIA CEO Jensen Huang positioned…

AI  ◇  Enterprise

NVIDIA DGX Rubin NVL8 Supports Intel Xeon 6 as Host CPU Option for x86-Based AI Inference

At NVIDIA GTC 2026, Intel announced that its Intel Xeon 6 processors are being used as the host CPUs for NVIDIA DGX Rubin NVL8 systems. This design win extends the established use of Xeon within NVIDIA’s GPU platforms and underscores the processor’s role in orchestrating large-scale, GPU-accelerated AI infrastructure. As AI workloads transition toward massive…

AI  ◇  Enterprise

HPE Cray GX5000 and AI Factory Get NVIDIA Vera Rubin NVL72, Quantum-X800 InfiniBand, and New Blackwell Options

HPE has unveiled updates to the NVIDIA AI Computing by HPE portfolio to support large-scale AI factories and next-generation supercomputers. The offerings combine compute, GPUs, networking, liquid cooling, software, and services into full-stack solutions designed for at-scale and sovereign environments.

NVIDIA AI Integrated into HPE Exascale Supercomputing Platform

Argonne National Laboratory, HLRS, Hudson River Trading…

AI  ◇  Client Accessories  ◇  Cloud  ◇  Enterprise  ◇  Workstation

Lenovo Expands Hybrid AI Advantage with NVIDIA at GTC 2026: New Inference Platforms, Workstations, and Rack-Scale AI Cloud

At NVIDIA GTC, Lenovo introduced an expanded phase of its Lenovo Hybrid AI Advantage with NVIDIA, positioning the portfolio as an end-to-end path for production AI inferencing across client devices, enterprise infrastructure, and large-scale AI cloud deployments. The announcement centers on accelerating AI adoption, cutting time-to-first-token (TTFT), and improving per-token economics as organizations move from…
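Time-to-first-token is simply the delay between submitting an inference request and receiving the first generated token, while the steady-state tokens-per-second rate after that point drives per-token cost. A minimal, vendor-neutral sketch of measuring both from any token stream (the `fake_stream` generator below is purely illustrative and not any vendor's API):

```python
import time

def measure_stream(stream):
    """Return (ttft_seconds, tokens_per_second) for an iterable of
    generated tokens, timed from the moment iteration begins."""
    start = time.perf_counter()
    ttft = None
    count = 0
    for _token in stream:
        if ttft is None:
            # Latency to the very first token = TTFT.
            ttft = time.perf_counter() - start
        count += 1
    total = time.perf_counter() - start
    tps = count / total if total > 0 else 0.0
    return ttft, tps

def fake_stream():
    """Illustrative stand-in for a model's streaming response."""
    time.sleep(0.05)          # simulated prefill delay before first token
    yield "Hello"
    for _ in range(4):
        time.sleep(0.01)      # simulated per-token decode step
        yield "tok"

ttft, tps = measure_stream(fake_stream())
```

The same harness works against any real streaming inference endpoint: wrap the response iterator and compare TTFT before and after an infrastructure change.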

AI  ◇  Enterprise

Dell Expands AI Factory with NVIDIA at GTC 2026: New Data Engines, Lightning File System, and Exascale Storage

Dell Technologies has introduced the Dell AI Data Platform, a set of data and storage technologies aligned with NVIDIA’s AI ecosystem, designed to help enterprises move from AI pilots to production-scale, agentic systems. The platform is designed to address a familiar constraint in enterprise AI: data that is too slow, siloed, or poorly governed to…