AMD and Nutanix have formed a multi-year partnership to create an open, full-stack AI infrastructure platform. This collaboration aims to support agentic AI applications in data centers, hybrid cloud, and edge environments. The agreement seeks to deliver a production-ready, high-performance alternative to integrated AI stacks by using open standards and compatible software frameworks.
The partnership combines silicon development with cloud management. By optimizing the Nutanix Cloud Platform and Nutanix Kubernetes Platform for AMD EPYC CPUs and AMD Instinct GPUs, the companies plan to provide a scalable foundation for AI inference. The integration will also encompass the AMD ROCm software ecosystem and the AMD Enterprise AI platform, yielding a unified solution backed by a wide range of OEM partners.
Strategic Investment and Engineering Collaboration
As part of the agreement, AMD will invest $150 million in Nutanix common stock at $36.26 per share. Additionally, AMD will allocate up to $100 million to support joint engineering projects and go-to-market collaboration. This funding aims to accelerate the development and adoption of the shared agentic AI platform. The equity investment is expected to close in the second quarter of 2026, pending regulatory approvals and customary closing conditions.
Dan McNamara, senior vice president and general manager of Compute and Enterprise AI at AMD, stated that enterprise customers need the flexibility to run critical models and workloads without compromise. He mentioned that the partnership with Nutanix focuses on creating a scalable, open platform that enables enterprises and service providers to innovate and expand AI deployments in various environments.
Tarkan Maner, President and Chief Commercial Officer at Nutanix, added that the partnership represents a common vision for production-ready AI infrastructure. He stressed that the integrated platforms will be specifically optimized for inference and agentic applications in hybrid enterprise settings.
Advancing the Open Ecosystem for Enterprise AI
The partnership arrives as enterprise AI infrastructure shifts toward inference workloads. Both companies believe that openness is crucial for long-term innovation, requiring infrastructure built on open standards and diverse architectural choices. The first agentic AI platform developed together is expected to launch in late 2026.
As AI inference becomes essential to enterprise computing, the supporting infrastructure must offer performance, efficiency, and simplicity at scale. The co-engineered platform is designed to provide high-performance inference acceleration with AMD Instinct GPUs and high-core-density computing using AMD EPYC processors.
Nutanix Enterprise AI will manage orchestration and unified lifecycle management. This integration aims to help enterprises deploy both open-source and commercial AI models without reliance on proprietary, integrated stacks. The goal is to create a new type of infrastructure capable of handling complex enterprise AI agents, multi-model inference services, and industry-specific intelligent applications.