August 5th, 2015 by Brian Beeler
Why We Don't Have a Nutanix NX-8150 Review
UPDATE August 8, 2015: Lukas Lundell, Global Director of Solutions and Performance Engineering at Nutanix, has replied in our forums. His statement included inaccurate remarks that have since been corrected.
For many years StorageReview has been the largest outlet independently reviewing enterprise IT. For the first time, we have to tell the story of how a review in progress came to an end without being published. There have been several prior instances where a vendor asked us not to post a review, but those reviews always went live. This story is significantly different, however, as we encountered a new set of behaviors from Nutanix. The summary is this: Nutanix sent an NX-8150 cluster for review; our testing revealed performance issues; Nutanix made several software updates over six months to improve performance; in June they asked to replace our cluster with newer Haswell-based systems; we agreed; Nutanix then backtracked and refused to send the promised replacements.
That brief history on its own is simple fact, but in the interest of full disclosure to our readers, we think it best to provide the full details of what happened. The history is important to understanding not just the way Nutanix operated in this case, but how their attitude could adversely affect those navigating the increasingly crowded and complex world of hyper-converged solutions. Prior to delivering the systems to us, we had multiple conversations about our testing capabilities and methodology. We first deployed the Nutanix NX-8150 cluster on January 20, 2015, assisted by an on-site Nutanix representative.
Initial testing centered on VMmark with Nutanix OS (NOS) 4.1. With most VMmark runs we start with a midrange data set of 10 tiles to get a feel for the system’s behavior. In this case, 10 tiles were attempted multiple times without achieving a passing score. To support our testing, Nutanix indicated an intention to replicate a VMmark environment and suggested the use of a single datastore versus many. Nutanix also indicated that Storage vMotion would incur a performance hit that might not be seen in normal customer deployments. Our finalized VMmark environment for hyper-converged platforms incorporated these tips and leveraged a separate SAN for the single Deploy LUN.
VMmark is an infrastructure workload designed to test how compute, storage and networking resources perform in a virtualized environment at scale. It stresses compute and storage through the load of applications as well as networking with VM to VM traffic and vMotion activities. A detailed breakdown of VMmark and what makes a tile is available here.
For hyper-converged platforms running ESXi, this could be considered one of the best benchmarks, as it stresses all components of the cluster and directly shows how VM-dense a solution can get before performance starts to suffer. Nutanix disagrees with this benchmark, claiming it is CPU/memory bound. Unfortunately, however, Nutanix never replicated their own VMmark testing setup to see for themselves that this is not the case.
Toward the end of January, the cluster was upgraded to NOS 4.1.1, with improved performance as one deliverable. We attempted testing with Sysbench, at which point Nutanix expressed concerns about proper deployment on hyper-converged systems. We paused testing here to integrate Nutanix’s feedback into all of our database testing profiles for hyper-converged systems. We also decided to wait for a new NOS software revision that was coming down the pipeline.
In May the cluster was upgraded to NOS 4.1.2. Nutanix indicated a desire to redo testing from our prior tests on earlier software, which we obliged. Again a few tests were attempted, but with 4.1.3 and ESXi 6.0 support coming shortly, we opted to pause testing to allow Nutanix to have the benefits of the more efficient hypervisor and other upgrades in ESXi 6.0.
In June, Nutanix held their first annual conference, .NEXT. At the event, Nutanix showed signs they were uncomfortable with performance testing. By this time a VMware Virtual SAN cluster had come into the lab for review, which made Nutanix even more defensive about performance testing. Performance engineers from Nutanix did, however, offer extensive advice at the event; many of those suggestions were rolled into our best practices for testing hyper-converged systems.
Later in the month, Nutanix upgraded the cluster to NOS 4.1.3, with support for ESXi 6.0, and Sysbench testing was performed again. This software revision, paired with our updated Sysbench testing configuration, was promising; we were able to double Sysbench performance to 600 TPS per node, with excellent transactional balance between the nodes. There were some latency spikes, but those didn’t appear to impact the overall cluster transaction count. Testing with applications introduces elements such as filesystem and system memory caching that synthetic tests won’t show. Despite having contributed to our Sysbench testing methodology, Nutanix disagreed with our database testing process. Their preferred approach is to deploy a lightweight VM running vdbench (a synthetic workload generator) and monitor it to see whether it meets some level of defined workload parameters.
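For context, "transactional balance" here means that each node contributes a roughly equal share of the cluster's total transactions per second. A minimal sketch of how such balance might be quantified; the per-node TPS figures below are hypothetical, not our measured results:

```python
# Hypothetical per-node Sysbench TPS figures for a four-node cluster.
# These numbers are illustrative only, not StorageReview's measured data.
node_tps = {"node-a": 598.2, "node-b": 603.7, "node-c": 601.1, "node-d": 597.4}

cluster_tps = sum(node_tps.values())
mean_tps = cluster_tps / len(node_tps)

# Imbalance as the worst-case deviation of any node from the mean,
# expressed as a percentage of the mean.
imbalance_pct = max(abs(t - mean_tps) for t in node_tps.values()) / mean_tps * 100

print(f"cluster total: {cluster_tps:.1f} TPS")
print(f"per-node mean: {mean_tps:.1f} TPS")
print(f"worst-case imbalance: {imbalance_pct:.2f}%")
```

A low worst-case deviation indicates the cluster is spreading the database load evenly; this is a rough sanity check and does not replace examining per-node latency over the course of a run.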
As we moved into July, Nutanix asked us to return the NX-8150 cluster, with the guidance that they’d like us to review a Haswell-based cluster instead, which they stated would offer higher performance than the Ivy Bridge-based hardware. We accepted the offer, as it was clear the NX-8150 was nearing the end of its sales window. The tone of the conversations turned quickly at this point. Once the NX-8150s were out of the StorageReview lab, all discussion went back to square one on creating a test plan. Nutanix asked that we execute their test plan; however, all tests outlined by Nutanix are synthetic rather than application-based. Testing had to be “nailed down” by the performance engineers at Nutanix before they would move forward. The promised new gear to replace the NX-8150s was put on hold, with Nutanix refusing to ship until a new test plan was agreed upon.
During our good-faith back and forth on testing, Nutanix suggested shipping us their own lower-spec VSAN platform and a Nutanix NX-3460-G4 (4x HDD and 2x SSD per node). We declined to accept the low-end configuration and balked at the idea of testing a competitor’s system without that vendor’s approval or involvement. Again we pushed for like-for-like testing with comparable hardware, but were met with obstacles from Nutanix.
At the end of July, Nutanix made a final hardware offer, asking to send us the NX-3460-G4 cluster so that we could review the interface and software. The offer explicitly stated, for the first time, that if we accepted the cluster, we could not run any performance workloads of any kind against it. Email excerpt:
You are welcome to publish screen shots of the Nutanix Prism interface, of course, to support the evaluation of the above. We kindly ask that you provide Nutanix an opportunity to review all descriptions and commentary of Nutanix product (hardware and software) 72 hours prior to publication for fact checking purposes. (We hope that this is consistent with your existing review process)
While we work to finalize the performance test plan, we ask that you not conduct any testing that measures the performance of the Nutanix product or the performance of any application (including test software) running on the Nutanix platform. Specifically, we ask that you not use any custom-developed or commercial test tools to measure performance, including but not limited to, Sysbench, VMmark, IOmeter and Open LDAP.
We will continue to work with you in good faith to jointly develop a detailed test plan (including methodology and test tools) for future evaluation of Nutanix product performance. Until we have a mutually agreed upon plan, we ask that you not undertake any performance testing of the Nutanix product, or publish results of prior performance testing.
We opted to decline this offer as well, as we believe our credibility with our readership relies on the complete story being told, rather than just the highlights a vendor requests. Further, the test plan currently proposed by Nutanix is fine for learning the system and characterizing lightweight behaviors, but it does not show what customers can expect as their demands grow after initial deployment.
Ultimately, Nutanix sent us a cluster ready to test with well-understood industry tools that we use on shared storage platforms, saying quite literally “...you can go crazy with the testing.” As soon as Nutanix found out their NX-8150s could be compared against other hardware/software solutions, they recalled the equipment and would not ship new gear unless it was on their terms. I take the blame for allowing this to happen. I thought offering a review of a newer version would be better for our readership. Sadly that didn’t happen. Nutanix now holds the position that their testing plan should be the hyper-converged standard, which is somewhat surprising given that their plan relies primarily on synthetic testing tools and doesn’t stress cluster performance.
That’s a longer debate though, which we’re happy to have, around the merits and best practices for hyper-converged testing. For now though, we won’t have a Nutanix review but we continue to press forward with many others in the space who are open to real world application testing and independent evaluation.
UPDATE August 8, 2015: A statement from Lukas Lundell of Nutanix can be viewed in our discussion forums here. The statement originally contained inaccurate details regarding the SSD capacity inside the Nutanix NX-8150 shipped to StorageReview; Lukas has since corrected that comment.