StorageReview.com
AI  ◇  Enterprise

WEKA Integrates NeuralMesh with NVIDIA STX to Address AI Inference Memory Bottlenecks

WEKA announced integration of its NeuralMesh platform with the NVIDIA STX reference architecture, positioning its Augmented Memory Grid as a core component for next-generation AI infrastructure. The combined solution targets one of the primary constraints in large-scale inference environments: memory limitations that impact performance, cost, and scalability. Running on NeuralMesh, WEKA’s Augmented Memory Grid extends…

[Image: WEKA NeuralMesh Dashboard]
AI  ◇  Enterprise

WEKA Announces General Availability of NeuralMesh AIDP

WEKA has announced the general availability of its NeuralMesh AI Data Platform, an enterprise-focused, composable infrastructure designed for AI factory deployments. Built on the NVIDIA AI Data Platform reference architecture, NeuralMesh AIDP provides an integrated stack that delivers AI-ready data to production environments, with a focus on accelerating time-to-deployment for large-scale AI applications. The platform…

[Image: HP Z8 Fury G6]
Consumer  ◇  Workstation

HP Expands Z Workstation Lineup With New Systems for AI, Mobile Work, and Hybrid IT

HP introduced a new round of Z workstations and AI systems at HP Imagine 2026, expanding its high-performance computing lineup for engineers, architects, designers, AI developers, and other professional users working with heavier local compute demands. The update covers desktop and mobile workstations, GPU-sharing tools, and new systems intended to support hybrid AI infrastructure across…

[Image: NVIDIA HGX Rubin NVL8]
Enterprise  ◇  Server

ASRock Rack Unveils Liquid-Cooled AI Systems Built Around NVIDIA Rubin and Blackwell at GTC 2026

ASRock Rack used NVIDIA GTC 2026 to introduce a broader range of liquid-cooled AI platforms for high-density enterprise and data center deployments. The announcement centered on new systems based on the NVIDIA HGX Rubin NVL8 platform, as well as NVIDIA MGX-based servers designed for the NVIDIA RTX PRO 4500 Blackwell Server Edition and liquid-cooled RTX…

AI  ◇  Enterprise

IBM and NVIDIA Announce Expanded Partnership to Operationalize Enterprise AI

At GTC 2026, IBM and NVIDIA announced a significant expansion of their partnership of more than a decade, focusing on moving AI from pilot phases to full-scale production. The collaboration targets several critical bottlenecks in enterprise AI adoption, including GPU-native data analytics, intelligent document processing, and infrastructure for regulated environments. The joint effort aims to provide a…

[Image: VDURA Global Namespace]
AI  ◇  Enterprise  ◇  Enterprise Storage

VDURA Introduces RDMA and Context-Aware Tiering for AI Data Platforms at GTC 2026

During GTC 2026, VDURA showcased updates to its Data Platform that improve GPU utilization and storage efficiency in AI environments. The announcement includes the general availability of Remote Direct Memory Access (RDMA) support, a preview of its Context-Aware Tiering technology, and validated infrastructure setups based on AMD EPYC Turin CPUs and NVIDIA ConnectX-7 networking. The updates…

AI  ◇  Enterprise

Groq LPU: Everything We Know

The LPU, or Language Processing Unit, is a custom AI inference accelerator designed and built by Groq, Inc. Founded in 2016 by Jonathan Ross, a former Google engineer credited as one of the original inventors of the TPU, Groq spent years developing a deterministic, software-defined processor architecture from the ground up. Unlike GPUs, which rely…

[Image: HPE AI Grid]
Enterprise  ◇  Networking

HPE Introduces AI Grid to Connect AI Factories and Distributed Inference Clusters Using NVIDIA Reference Architecture

HPE has announced the HPE AI Grid, a comprehensive infrastructure solution aligned with the NVIDIA AI Grid reference architecture. It is designed to securely connect AI factories and distributed inference clusters across regional and remote edge locations. HPE positions this platform for service providers that need to deploy and manage thousands of distributed inference sites…

[Image: HPE ProLiant Compute DL380a Gen12]
AI  ◇  Enterprise

HPE Expands NVIDIA AI Computing Portfolio with Scalable Private Cloud AI and Blackwell GPU Integration

HPE has announced a significant expansion of the NVIDIA AI Computing by HPE portfolio, introducing integrated systems designed to scale enterprise AI deployments while maintaining security and governance. The update focuses on co-engineered, validated architectures intended to accelerate time-to-value for AI inferencing and model development. HPE CEO Antonio Neri and NVIDIA CEO Jensen Huang positioned…