Hewlett Packard Enterprise and NVIDIA Announce Co-Developed AI Solutions

Hewlett Packard Enterprise (HPE) and NVIDIA have introduced NVIDIA AI Computing by HPE, a portfolio of co-developed AI solutions designed to accelerate the adoption of generative AI in enterprises. This portfolio includes several integrated offerings and services to enable businesses of all sizes to efficiently develop and deploy AI applications.

There was a lot to announce during the HPE Discover keynote. HPE President and CEO Antonio Neri was joined on stage by NVIDIA founder and CEO Jensen Huang to highlight this initiative, marking the expansion of a decades-long partnership.

Neri emphasized that generative AI holds immense potential for enterprise transformation, but the complexities of fragmented AI technology pose risks and barriers to large-scale adoption. To address these challenges, HPE and NVIDIA have co-developed a turnkey private cloud for AI, enabling enterprises to focus their resources on developing new AI use cases that boost productivity and unlock new revenue streams. Huang highlighted that the integration of NVIDIA and HPE technologies equips enterprise clients and AI professionals with the most advanced computing infrastructure and services to expand the frontier of AI.

Key Offerings and Features

HPE Private Cloud AI: Among the portfolio’s key offerings is HPE Private Cloud AI, a first-of-its-kind solution that deeply integrates NVIDIA AI computing, networking, and software with HPE’s AI storage, compute, and the HPE GreenLake cloud. This solution provides enterprises with an energy-efficient, fast, and flexible path for sustainably developing and deploying generative AI applications. It is powered by the new OpsRamp AI copilot, which helps IT operations improve workload and IT efficiency. HPE Private Cloud AI includes a self-service cloud experience with full lifecycle management and is available in four right-sized configurations to support a broad range of AI workloads and use cases.

Joint Go-to-Market Strategy: All NVIDIA AI Computing by HPE offerings and services will be available through a joint go-to-market strategy. This strategy spans sales teams and channel partners, training, and a global network of system integrators. These integrators include Deloitte, HCLTech, Infosys, TCS, and Wipro, all of which can help enterprises across various industries run complex AI workloads.

Co-Developed Private Cloud AI Portfolio

HPE Private Cloud AI delivers a unique, cloud-based experience to accelerate innovation and return on investment while managing enterprise risk from AI. This solution supports inference, fine-tuning, and retrieval-augmented generation (RAG) AI workloads that utilize proprietary data. It ensures enterprise control over data privacy, security, transparency, and governance requirements. Additionally, it offers a cloud experience with IT operations (ITOps) and artificial intelligence operations (AIOps) capabilities to increase productivity. HPE Private Cloud AI provides a fast, flexible consumption path to meet future AI opportunities and growth.
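To illustrate the RAG pattern the platform targets, here is a deliberately minimal sketch of the flow: retrieve the most relevant proprietary document for a query, then fold it into the prompt sent to a model. Real deployments use an embedding model and a vector store; this sketch stands in with keyword-overlap scoring to stay self-contained, and the sample documents are invented for illustration.

```python
# Toy retrieval-augmented generation (RAG) flow over proprietary documents.
# Keyword overlap stands in for embedding similarity to keep this runnable.

def score(query: str, doc: str) -> float:
    """Fraction of query words that also appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the top-k documents ranked by overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user question with retrieved context before inference."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Warranty claims must be filed within 90 days of purchase.",
    "The cafeteria opens at 7 a.m. on weekdays.",
]
print(build_prompt("How long do customers have to file warranty claims?", docs))
```

The augmented prompt keeps the model grounded in enterprise data, which is the point of running RAG against proprietary sources rather than relying on a model's training data alone.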

The foundation of the AI and data software stack starts with the NVIDIA AI Enterprise software platform, which includes NVIDIA NIM inference microservices. This platform accelerates data science pipelines and streamlines the development and deployment of production-grade copilots and other generative AI applications. NVIDIA NIM delivers easy-to-use microservices for optimized AI model inferencing, offering a smooth transition from prototype to secure deployment of AI models in various use cases. Complementing NVIDIA AI Enterprise and NVIDIA NIM, HPE AI Essentials software delivers a ready-to-run set of curated AI and data foundation tools with a unified control plane. This software provides adaptable solutions, ongoing enterprise support, and trusted AI services, ensuring AI pipelines are compliant, explainable, and reproducible throughout the AI lifecycle.
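NVIDIA NIM microservices expose an OpenAI-compatible chat-completions API, which is what makes the prototype-to-production transition straightforward. The sketch below builds such a request payload; the model name and the typical local endpoint noted in the comment are illustrative assumptions, not specifics from this announcement.

```python
import json

def build_nim_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completions payload, the format NIM accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# A NIM container typically serves this payload at
# http://<host>:8000/v1/chat/completions (endpoint and model are illustrative).
payload = build_nim_chat_request(
    "meta/llama3-8b-instruct",
    "Summarize the key risks in our Q2 operations report.",
)
print(json.dumps(payload, indent=2))
```

Because the payload matches the OpenAI API shape, existing client code can usually be pointed at a NIM endpoint by changing only the base URL and model name.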

Integrated AI Infrastructure

HPE Private Cloud AI includes a fully integrated AI infrastructure stack that features NVIDIA Spectrum-X Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers. These servers support NVIDIA L40S, NVIDIA H100 NVL Tensor Core GPUs, and the NVIDIA GH200 NVL2 platform, delivering optimal performance for the AI and data software stack.

Cloud Experience Enabled by HPE GreenLake Cloud

HPE Private Cloud AI offers a self-service cloud experience enabled by HPE GreenLake Cloud. Through a single, platform-based control plane, HPE GreenLake Cloud services provide manageability and observability to automate, orchestrate, and manage endpoints, workloads, and data across hybrid environments. This includes sustainability metrics for workloads and endpoints.

Integrating OpsRamp’s IT operations with HPE GreenLake cloud will deliver observability and AIOps to all HPE products and services. OpsRamp provides observability for the end-to-end NVIDIA accelerated computing stack, including NVIDIA NIM and AI software, NVIDIA Tensor Core GPUs, and AI clusters, as well as NVIDIA Quantum InfiniBand and NVIDIA Spectrum Ethernet switches. IT administrators can gain insights to identify anomalies and monitor their AI infrastructure and workloads across hybrid, multi-cloud environments.
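As a rough illustration of the kind of anomaly flagging an AIOps layer performs on GPU and cluster telemetry, here is a simple z-score detector over utilization samples. This is a generic statistical sketch, not OpsRamp's actual method, and the sample data is invented.

```python
import statistics

def flag_anomalies(samples: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of samples more than `threshold` standard
    deviations from the mean -- a minimal stand-in for AIOps-style
    anomaly detection over infrastructure telemetry."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [i for i, s in enumerate(samples) if abs(s - mean) / stdev > threshold]

gpu_util = [62, 64, 63, 61, 65, 98, 62, 63]  # one spike at index 5
print(flag_anomalies(gpu_util, threshold=2.0))  # → [5]
```

Production systems replace the fixed threshold with learned baselines per metric, but the core idea of scoring deviations from expected behavior is the same.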

The new OpsRamp operations copilot utilizes NVIDIA’s accelerated computing platform to analyze large datasets for insights with a conversational assistant, boosting productivity for operations management. OpsRamp will also integrate with CrowdStrike APIs, allowing customers to see a unified service map view of endpoint security across their entire infrastructure and applications.

Collaboration with Global System Integrators

To advance enterprises’ time to value, Deloitte, HCLTech, Infosys, TCS, and Wipro support the NVIDIA AI Computing by HPE portfolio and HPE Private Cloud AI. These global system integrators will help enterprises develop industry-focused AI solutions and use cases with clear business benefits.

Support for NVIDIA’s Latest Technologies

HPE will support NVIDIA’s latest GPUs, CPUs, and Superchips. The HPE Cray XD670 will support eight NVIDIA H200 NVL Tensor Core GPUs and is ideal for large language model (LLM) builders. The HPE ProLiant DL384 Gen12 server with dual NVIDIA GH200 NVL2 is ideal for LLM consumers using larger models or retrieval-augmented generation (RAG). The HPE ProLiant DL380a Gen12 server will support up to eight NVIDIA H200 NVL Tensor Core GPUs, providing flexibility to scale generative AI workloads.

HPE did not show any NVL72-class systems or Blackwell hardware at the event, but says it will be “time to market” in supporting the NVIDIA GB200 NVL72 / NVL2 platforms, along with the NVIDIA Blackwell, Rubin, and Vera architectures.

[Image: HPE Cray XD670]

High-Density File Storage

HPE GreenLake for File Storage has achieved NVIDIA DGX BasePOD certification and NVIDIA OVX storage validation, offering a proven enterprise file storage solution for AI, generative AI, and GPU-intensive workloads at scale. HPE says it will be “time to market” on upcoming NVIDIA reference architecture storage certification programs.

Availability

- HPE Private Cloud AI: expected to be generally available in the fall.
- HPE ProLiant DL380a Gen12 server with NVIDIA H200 NVL Tensor Core GPUs: expected to be generally available in the fall.
- HPE ProLiant DL384 Gen12 server with dual NVIDIA GH200 NVL2: expected to be generally available in the fall.
- HPE Cray XD670 server with NVIDIA H200 NVL: expected to be generally available in the summer.

This collaboration between HPE and NVIDIA signifies an expansion of their extensive partnership. This initiative aims to equip enterprises with the most advanced computing infrastructure and services to expand the frontier of AI.

Jordan Ranous

AI Specialist; navigating you through the world of Enterprise AI. Writer and Analyst for Storage Review, coming from a background of Financial Big Data Analytics, Datacenter Ops/DevOps, and CX Analytics. Pilot, Astrophotographer, LTO Tape Guru, and Battery/Solar Enthusiast.
