by Brian Beeler

EMC VxRack Node powered by ScaleIO: Scaled Sysbench OLTP Performance Review (2-layer)

In the first segment of our VxRack Node review, we covered the deployment options, the primary management interface, and the hardware behind our all-flash performance nodes from VCE, the Converged Platforms Division of EMC. In this portion of the review we look at the VxRack Nodes in a two-layer SAN configuration and how they perform under our MySQL Sysbench workload. We pressed ScaleIO and the underlying hardware to 99.2% capacity to evaluate performance as the workload intensity and capacity footprint increased. Our objective is to measure the Nodes' ability to deliver high-speed transactional performance, including throughput and latency, over an ever-more-demanding workload scale in our virtualized environment.

VCE VxRack Node (Performance Compute All Flash PF100) Specifications

  • Chassis - # of Node: 2U-4 node
  • Processors Per Node: Dual Intel E5-2680 V3, 12c, 2.5GHz
  • Chipset: Intel 610
  • DDR4 Memory Per Node: 512GB (16x 32GB)
  • Embedded NIC Per Node: Dual 1-Gbps Ethernet ports + 1 10/100 management port
  • RAID Controller Per Node: 1x LSI 3008
  • SSDs Per Node: 4.8TB (6x 2.5-inch 800GB eMLC)
  • SATADOM Per Node: 32GB SLC
  • 10GbE Port Per Node: 4x 10Gbps ports SFP+
  • Power Supply: Dual 1600W platinum PSU AC
  • Router: Cisco Nexus C3164Q-40GE

Dell PowerEdge R730 Virtualized MySQL 4-8 node Cluster

  • Eight to sixteen Intel E5-2690 v3 CPUs for 249-499GHz in cluster (Two per node, 2.6GHz, 12-cores, 30MB Cache)
  • 1-2TB RAM (256GB per node, 16GB x 16 DDR4, 128GB per CPU)
  • SD Card Boot (Lexar 16GB)
  • 4-8 x Mellanox ConnectX-3 InfiniBand Adapter (vSwitch for vMotion and VM network)
  • 4-8 x Emulex 16Gb dual-port FC HBA
  • 4-8 x Emulex 10GbE dual-port NIC
  • VMware ESXi vSphere 6.0 / Enterprise Plus 8-CPU
  • 10GbE Switching Hardware
    • Front-End Ports: Mellanox SX1036 10/40GbE Switch
    • Back-End Ports: Cisco Nexus 3164 10/40GbE Switch

Sysbench Performance

Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and a third for the database under test (270GB). In previous tests we allocated 400GB to the database volume (253GB database size), but to pack additional VMs onto the VxRack Node we shrunk that allocation to make more room. From a system resource perspective, we configured each VM with 16 vCPUs and 60GB of DRAM, and leveraged the LSI Logic SAS SCSI controller. The load-generation systems are Dell R730 servers; we range from four to eight in this review, adding one server per group of four VMs.

Sysbench Testing Configuration (per VM)

  • CentOS 6.3 64-bit
  • Storage Footprint: 1TB, 800GB used
  • Percona XtraDB 5.5.30-rel30.1
    • Database Tables: 100
    • Database Size: 10,000,000
    • Database Threads: 32
    • RAM Buffer: 24GB
  • Test Length: 3 hours
    • 2 hours preconditioning 32 threads
    • 1 hour 32 threads
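
The configuration above maps onto a sysbench 0.5 run roughly as sketched below. This is an illustrative approximation, not our exact test harness; the host, credentials, and lua script path are placeholders.

```shell
# Hypothetical sysbench 0.5 invocation approximating the parameters above.
# Host, credentials, and script path are placeholders, not from the review.
OLTP=/usr/share/doc/sysbench/tests/db/oltp.lua
OPTS="--test=$OLTP --oltp-tables-count=100 --oltp-table-size=10000000 \
      --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest"

# Build the 100-table, 10,000,000-row-per-table dataset
sysbench $OPTS prepare

# 2-hour preconditioning pass at 32 threads
sysbench $OPTS --num-threads=32 --max-time=7200 --max-requests=0 run

# 1-hour measured run at 32 threads
sysbench $OPTS --num-threads=32 --max-time=3600 --max-requests=0 run
```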

Out of the gate with 4VMs, the VxRack Nodes posted a total of nearly 4,000 TPS, a touch lower than the all-flash X-IO ISE 860 SAN and about 60% faster than a hybrid Nutanix 4-node configuration. All of the VxRack Nodes performed nearly equally, delivering about 1,000 TPS each. As the workload scales up, ScaleIO really starts to differentiate itself. At 8VMs ScaleIO closes the gap with the X-IO ISE 860, with performance jumping to just over 6,400TPS. At 12VMs it takes the lead by a few hundred, measuring 7,488TPS.

Here's where it gets really interesting. We've tested 12-16VM loads on other systems, but that is where aggregate performance generally leveled off and tapered down. At 16VMs we hit the upper edge of what the X-IO can deliver effectively, but ScaleIO keeps going, tacking on a 15% gain and measuring over 9,500TPS. Bumping up to 20VMs, there are still no signs of slowing down, with ScaleIO now measuring over 12,000TPS. With four more VMs added to the mix, ScaleIO again pushes ahead like a broken record, measuring over 13,800TPS at 24VMs. At 28VMs ScaleIO chugs along without missing a beat, now measuring 15,641TPS. With capacity limits removed, ScaleIO pushed to 99.2% utilization with 32VMs; cluster performance measured over 17,300TPS when we finally threw in the towel.
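Pulling the aggregate figures above into one quick sketch (rounded where the text says "nearly" or "over") shows how per-VM throughput holds up as the cluster fills:

```python
# Approximate aggregate TPS at each VM count, as reported above.
results = {4: 4000, 8: 6400, 12: 7488, 16: 9500,
           20: 12000, 24: 13800, 28: 15641, 32: 17300}

for vms, tps in sorted(results.items()):
    print(f"{vms:2d} VMs: {tps:6,d} TPS total, {tps / vms:6.1f} TPS/VM")
```

Per-VM throughput eases off gradually, but the aggregate climbs at every step, which is exactly the behavior described above; many arrays instead plateau in the 12-16VM range.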

The key takeaway here is that the VxRack Nodes increased performance at every step, losing little steam even at full capacity. Many other SANs would have hit an I/O bottleneck before capacity ran out, the point where the workload catches up to the hardware's capabilities. Beyond the incredible throughput, another interesting story plays out in just how well ScaleIO maintained application workload latency.

Generally, when you put a heavy workload on a storage array, at some point you will see a bell curve in performance: it starts off slow, peaks somewhere in the middle, then winds down at the expense of rapidly increasing latency. We never found that point with ScaleIO, even at 99.2% capacity utilization. As our workload kicked off in the 4-8VM range, ScaleIO went from 32ms to 39.9ms average MySQL latency. Compared to the X-IO ISE 860, which measured 29ms and 39ms respectively, the VxRack platform had a slightly higher initial response profile. In the 12-32VM range, though, the tide turned, with ScaleIO delivering remarkably low and flat MySQL latency. The difference between 12VMs and 32VMs was just under 8ms.

Shifting our focus toward peak latency with the 99th percentile view, ScaleIO delivers one of the best profiles an application engineer or web-scale provider could hope for. Under increasing workload intensity, ScaleIO keeps its calm and doesn't let peak application response times blow up, even at the highest load we threw at it. What this means for customers is that even under peak or abnormally high-load conditions, the ScaleIO platform keeps its cool and consistently delivers content without lag.
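To see why the 99th percentile view matters beyond the average, here's a small sketch with synthetic latency samples (illustrative values only, not measurements from this review): a handful of slow outliers barely moves the mean but shows up clearly at the 99th percentile.

```python
import random

random.seed(42)
# Synthetic response times (ms): mostly fast, with 2% slow outliers.
# Illustrative only -- not measurements from the review.
samples = ([random.uniform(30, 45) for _ in range(980)] +
           [random.uniform(200, 400) for _ in range(20)])

def percentile(data, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(data)
    rank = max(1, int(round(pct / 100 * len(ordered))))
    return ordered[rank - 1]

avg = sum(samples) / len(samples)
p99 = percentile(samples, 99)
print(f"average: {avg:.1f} ms, 99th percentile: {p99:.1f} ms")
```

The average stays in the low 40ms range while the 99th percentile lands in the hundreds of milliseconds; a flat 99th percentile curve, as ScaleIO delivered here, means those outliers simply aren't happening.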

Conclusion

As we wrap up our first performance segment on EMC's VxRack Node powered by ScaleIO, we can't help but be impressed by the level of performance on offer. ScaleIO proved to be one of the few platforms to knock it out of the park in all areas of our scaled MySQL test. First, throughput was phenomenal, breaking records by an incredibly wide margin... even at near full capacity. Second, application latency remained nearly flat through an ever-increasing testing environment. Third, even under rising application loads ScaleIO kept peak latency in check, which is very important in a web-scale environment where swings in demand could cause other applications to suffer if response times creep too high.

Sure, it's easy to say the ScaleIO nodes did so well because they're all-flash. As the numbers show, though, the system easily coped with the workload at full capacity, something very few flash arrays can do while keeping latency in check. It's also worth noting that this first performance review highlights the flexibility of ScaleIO, as we identified in part 1. It can be deployed as a SAN or hyper-converged on any equipment you like, consumed as a VxRack Node in a variety of flavors, or as the engineered VCE VxRack System 1000 Series solution.

EMC VxRack Node Review: Overview
EMC VxRack Node Powered By ScaleIO: SQL Server Performance Review (2-layer)
EMC VxRack Node Powered By ScaleIO: Synthetic Performance Review (2-layer)
EMC VxRack Node Powered By ScaleIO Review: Synthetic Performance Review (HCI)
EMC VxRack Node Powered By ScaleIO: SQL Server Performance Review (HCI)
EMC VxRack Node Powered By ScaleIO: VMmark Performance Review (HCI)
