
Silicon Mechanics zStax StorCore 104 Review (NexentaStor 4)

by StorageReview Enterprise Lab

The Silicon Mechanics zStax StorCore 104 is a unified file and block storage platform with a multi-tier architecture that scales up to petabytes of storage and is designed for long-term archiving, shared-file access, backend storage for virtualized environments, and high-availability applications. Silicon Mechanics’ NexentaStor-based SDS solutions let organizations pick precisely the amount and types of storage and networking necessary to meet operational requirements. Designed to scale to meet just about any need, the zStax StorCore 104 can be configured with either one or two controller nodes and up to 1.5-3PB of storage capacity depending on its use case.

The zStax StorCore 104 has a sizeable list of standard features that deliver performance and simplified usability thanks to the underlying ZFS file system. The system delivers unlimited snapshots, inline compression and deduplication, a web-based GUI (with optional CLI), non-disruptive maintenance and upgrades, thin provisioning, automatic data-integrity checksumming, copy-on-write, hybrid configuration support, and regular data scrubbing.
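
These data services are inherited largely from the underlying ZFS file system. As a rough sketch only, the Python snippet below wraps the generic OpenZFS commands that correspond to several of the features above; it uses hypothetical pool and dataset names and is not NexentaStor’s own NMV or NMC tooling.

```python
# Illustrative only: generic OpenZFS commands, not NexentaStor's management tooling.
# Pool and dataset names ("tank", "tank/vmware") are hypothetical.
import subprocess

def zfs(*args):
    """Run a storage admin command and raise if it fails."""
    subprocess.run(args, check=True)

# Enable inline compression and deduplication on a dataset
zfs("zfs", "set", "compression=lz4", "tank/vmware")
zfs("zfs", "set", "dedup=on", "tank/vmware")

# Thin-provision a 500GB block volume (sparse zvol) for iSCSI or FC export
zfs("zfs", "create", "-s", "-V", "500G", "tank/vmware/lun0")

# Take a copy-on-write snapshot; ZFS snapshots are effectively unlimited
zfs("zfs", "snapshot", "tank/vmware@nightly")

# Kick off a data-integrity scrub of the entire pool
zfs("zpool", "scrub", "tank")
```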

On the hardware side, the StorCore 104 leverages commodity hardware including Intel Romley-based technology, a minimum of 128GB of memory (up to 512GB per node, or 1TB across two nodes), gigabit Ethernet with optional 10Gb Ethernet and Fibre Channel connectivity, and 5x PCIe slots to further expand functionality. For storage, Silicon Mechanics offers SAS 6Gb/s HDDs as well as MLC NAND SSDs. Organizations can weight their configuration toward performance or capacity, since the HDDs come in 7K, 10K and 15K classes (Seagate Constellation ES.3, Seagate Savvio 10K.6 and Seagate Cheetah 15K.7, respectively). The current shipping SSDs are SanDisk Optimus models, although at the time of our review, our unit was configured with sTec ZeusIOPS models. Of course, all of the hardware can be upgraded or repurposed, one of the core tenets of SDS and commodity-based solutions.

For our review in the StorageReview Enterprise Test Lab, Silicon Mechanics delivered a dual-controller, single-disk-shelf cluster that incorporates 2x Intel Xeon E5-2620 processors and 256GB of memory per controller node, along with 1GbE, 10GbE and Fibre Channel connectivity. For storage, our system leverages a range of drives to match the different applications required in our testing environment. Primary write caching is handled by 2x 8GB sTec ZeusRAM devices, while secondary read caching is handled by 2x 200GB sTec ZeusIOPS SSDs. Read/write performance and overall capacity are delivered by 24x 600GB Seagate Cheetah 15K.7 HDDs.

Nexenta, as an SDS vendor, requires a partner to assemble, deploy and support the combined solution. Nexenta has dozens of partners, and Silicon Mechanics is one of its top providers. When Silicon Mechanics sells a zStax solution, deployment and 24/7 support are included in the price. This includes walking customers through the setup process, light configuration, deployment and ongoing support. zStax systems can also be configured to automatically notify Silicon Mechanics of any issues, so in many cases they can be resolved before the storage admin notices an impact.

The Silicon Mechanics zStax StorCore 104 is available now with a standard three-year warranty (extendable). Our configuration carries a list price of just under $40,000.

Silicon Mechanics zStax StorCore 104 Specifications

  • Performance (per node, two nodes in our system)
    • Processor: 2x Intel Xeon E5-2620 (up to E5-2670)
    • System Memory: 256GB (16x 16GB) ECC (up to 512GB)
  • Controller Node
    • Power Supply: Redundant 740W Power Supply – 80 PLUS Platinum Certified
  • Disk Shelves
    • Platform: Dual Expander – 3U – 28 Drive Bays (up to 4U – 45 bays)
    • Power Supply: Redundant 1620W Power Supply – 80 PLUS Platinum Certified
    • Rail Kit: Quick-Release – Square Holes – 26.5″ to 36.4″
  • Capacity
    • Primary Write Cache: 2x 8GB sTec ZeusRAM
    • Secondary Read Cache: 2x 200GB sTec ZeusIOPS
    • Primary Storage: 24x 600GB Seagate Cheetah 15K.7
  • Management
    • Web-based management interface
    • Inline compression and deduplication
    • Unlimited snapshots
    • Unlimited file sizes
  • Storage Protocols
    • Block: iSCSI, Fibre Channel
    • File: CIFS, NFS
  • Network Protocols
    • Ethernet: 1GbE standard, 10GbE optional, 40GbE optional
    • Fibre Channel: 4Gb/s, 8Gb/s, and 16Gb/s FC (all optional)
  • Physical Characteristics
    • PCIe expansion slots: 5
    • Minimum rack units: 5
    • Maximum primary read cache size: 384GB
    • Open drive bays: Unlimited
    • Maximum capacity: Unlimited
    • Modular, peta-scale architecture
  • Standard hardware warranty: 3-year 24/7

Design and Build

The Silicon Mechanics zStax StorCore 104 system in our labs is composed of two controller units and a single disk shelf. The design is sleek, and the black color scheme blends in with other hardware in a server rack. The fronts of the units are heavily ventilated to ensure proper cooling. There are 16 drive slots on the front of the device. Each slot has an indicator light on the right-hand side: solid blue to show the drive is available, blinking blue to show the drive is being accessed, and blinking red to indicate the location of a requested drive. At the bottom of each drive slot are the drive location number and the drive eject button.

The rear of each controller node is primarily devoted to connectivity. On the left side there are two redundant power supplies with handles for easy maintenance; each power supply has an ejection button in its top-left corner and can be removed individually. The center features a module with serial, USB and Ethernet connectivity, and the right side houses the six (five usable) PCIe slots for additional network connectivity or storage node expansion.

The 3U disk shelf provides 28x bays to maximize storage capacity, and the device comes with a quick-release rail kit for racking. Larger 45- and 60-bay disk shelves are also an option.

Management

NexentaStor can be managed through the Nexenta Management View (NMV), a GUI that works with all popular browsers. Once users browse to the NMV IP address, they can log in from the upper-right corner. The layout is fairly simple, with four main tabs running across the top: Status, Settings, Data Management, and Analytics. Clicking each tab opens a new view that lets users manage volumes (disks and JBODs), folders, and users, among other tasks.

Drilling into individual node status or cluster status is fairly easy, although you need to keep track of which node you are logged into by name if you aren’t familiar with the exact IP addresses. The host you are logged into is reported at the top of each page, which in this case is zstax01.

CIFS and NFS shares can be viewed, created, and modified through the Shares tab. This section gets slightly more complicated than other management interfaces we’ve seen, although the level of customization is also much higher. For the power user who knows exactly what they want to do, this interface doesn’t hold anything back.
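
Under the GUI, a share ultimately maps to a ZFS dataset with share properties set on it. The sketch below is a hypothetical illustration using generic OpenZFS commands and made-up dataset names; the NMV performs the equivalent steps with far more granular access-control options.

```python
# Hypothetical example using generic OpenZFS share properties; not Nexenta's
# own interface, and the dataset names are made up.
import subprocess

def zfs(*args):
    subprocess.run(args, check=True)

# Create a dataset and publish it over NFS and SMB/CIFS
zfs("zfs", "create", "tank/shares/projects")
zfs("zfs", "set", "sharenfs=on", "tank/shares/projects")
zfs("zfs", "set", "sharesmb=on", "tank/shares/projects")
```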

Drilling into the management section for the disk pools, users can see second-by-second drive activity in each pool, down to the individual disk level. This is helpful for spotting problems as well as making sure your environment is balanced. Through this section you can also make real-time adjustments to inline compression and dedupe settings to gauge their impact on performance.

Testing Background and Comparables

We publish an inventory of our lab environment, an overview of the lab’s networking capabilities, and other details about our testing protocols so that administrators and those responsible for equipment acquisition can fairly gauge the conditions under which we have achieved the published results. To maintain our independence, none of our reviews are paid for or managed by the manufacturer of equipment we are testing.

We will be comparing the zStax StorCore 104 to the AMI StorTrends 3500i, X-IO ISE 710, Dot Hill AssuredSAN Ultra48, and NetApp FAS2240-2.

With each hybrid platform we test, it is very important to understand how each vendor configures the unit for different workloads as well as the networking interface used for testing. The amount of flash used is just as important as the underlying caching or tiering process when it comes to how well it will perform in a given workload. The following list shows the amount of flash and HDD, how much is usable in our specific configuration and what networking interconnects were leveraged:

  • Silicon Mechanics zStax StorCore 104
    • List price: $39,778
    • Cache: 2x 256GB (16x 16GB Registered ECC Memory)
    • HDD: 14.4TB (600GB 15K HDD x24)
    • Nexenta 4.0
  • Dot Hill AssuredSAN Ultra48
    • List price: $79,000
    • 14.4TB HDD (4x 600GB 10K HDD x12 RAID10) or 24TB HDD (4x 600GB 10K HDD x12 RAID50)
    • Network Interconnect: 16Gb FC, 4x 16Gb FC per controller
  • AMI StorTrends 3500i
    • List price: $87,999
    • Flash Cache: 200GB (200GB SSDs x2 RAID1)
    • Flash Tier: 1.6TB usable (800GB SSDs x4 RAID10)
    • HDD: 10TB usable (2TB HDDs x10 RAID10)
    • Network Interconnect: 10GbE iSCSI, 2x 10GbE Twinax per controller
  • X-IO ISE 710
    • List price: $115,000
    • 800GB Flash (200GB SSDs x10 RAID10)
    • 3.6TB HDD (300GB 10K HDD x30 RAID10)
    • Network Interconnect: 8Gb FC, 2x 8Gb FC per controller
  • NetApp FAS2240-2
    • HDD: 10.8TB usable (600GB 10K HDDs x12 RAID6 per controller x2)
    • Network Interconnect: 10GbE iSCSI, 2x 10GbE Twinax per controller

Each of the comparable arrays was also benchmarked with our Lenovo ThinkServer RD630 Testbed:

  • 2x Intel Xeon E5-2690 (2.9GHz, 20MB Cache, 8-cores)
  • Intel C602 Chipset
  • Memory – 16GB (2x 8GB) 1333MHz DDR3 Registered RDIMMs
  • Windows Server 2008 R2 SP1 64-bit, Windows Server 2012 Standard, CentOS 6.3 64-bit
  • Boot SSD: 100GB Micron RealSSD P400e
  • LSI 9211-4i SAS/SATA 6.0Gb/s HBA (For boot SSDs)
  • LSI 9207-8i SAS/SATA 6.0Gb/s HBA (For benchmarking SSDs or HDDs)
  • Emulex LightPulse LPe16202 Gen 5 Fibre Channel (8GFC, 16GFC or 10GbE FCoE) PCIe 3.0 Dual-Port CFA

Mellanox SX1036 10/40Gb Ethernet Switch and Hardware

  • 36x 40GbE Ports (Up to 64x 10GbE Ports)
  • QSFP splitter cables 40GbE to 4x10GbE

Application Performance Analysis

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments. Our SQL Server protocol uses a 685GB (3,000 scale) SQL Server database and measures transactional performance and latency under a load of 30,000 virtual users, and then again with a half-size database and 15,000 virtual users.

Under a load of 15,000 virtual users, the zStax StorCore 104 with one controller was second-to-last at 2604.69TPS. The top performer was the AMI StorTrends 3500i at 3152.24TPS.

Looking at average latencies, we see similar results. The zStax StorCore 104 with one controller was again second-to-last at 1,019ms, and the AMI StorTrends 3500i was again the top performer at 15ms.

When increasing the workload to 30,000 virtual users, we tested the zStax StorCore 104 with both one and two controllers. This time the zStax StorCore 104 with two controllers came in third at 5188.8TPS. The top performer was the AMI StorTrends 3500i with 6272.4TPS.

The average latency results were similar, with the two-controller zStax StorCore 104 coming in third at 1,039ms. Again, the top performer was the AMI StorTrends 3500i at 41ms.

Enterprise Synthetic Workload Analysis

Prior to initiating each of the fio synthetic benchmarks, our lab preconditions the device into steady-state under a heavy load of 16 threads with an outstanding queue of 16 per thread. Then the storage is tested in set intervals with multiple thread/queue depth profiles to show performance under light and heavy usage.

Preconditioning and Primary Steady-State Tests:

  • Throughput (Read+Write IOPS Aggregated)
  • Average Latency (Read+Write Latency Averaged Together)
  • Max Latency (Peak Read or Write Latency)
  • Latency Standard Deviation (Read+Write Standard Deviation Averaged Together)

This synthetic analysis incorporates four profiles, which are widely used in manufacturer specifications and benchmarks (a sample fio invocation for one of these profiles appears after the list):

  • 4k – 100% Read and 100% Write
  • 8k – 100% Read and 100% Write
  • 8k – 70% Read/30% Write
  • 128k – 100% Read and 100% Write
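
To make the profiles concrete, the sketch below shows roughly how the 8k 70% read/30% write profile at 16 threads and a queue depth of 16 could be expressed as an fio invocation. The target device path, runtime, and wrapper are illustrative placeholders rather than our exact test scripts.

```python
# Rough illustration of an fio run for the 8k 70/30 profile at 16 threads,
# queue depth 16. Device path and runtime are placeholders, not our scripts.
import subprocess

fio_cmd = [
    "fio",
    "--name=8k-7030",
    "--filename=/dev/sdb",   # target LUN (placeholder path)
    "--ioengine=libaio",
    "--direct=1",
    "--rw=randrw",
    "--rwmixread=70",        # 70% reads / 30% writes
    "--bs=8k",
    "--numjobs=16",          # 16 worker threads
    "--iodepth=16",          # 16 outstanding I/Os per thread
    "--time_based",
    "--runtime=600",         # interval length in seconds (placeholder)
    "--group_reporting",
]
subprocess.run(fio_cmd, check=True)
```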

Our first benchmark measures the performance of random 4k transfers comprised of 100% write and 100% read activity. The zStax StorCore 104 achieved a read throughput of 147,585IOPS with 25G LUNs and 14,811IOPS with 250G LUNs. The write throughputs were 12,201IOPS at 25G LUNs and 4,505IOPS with 250G LUNs.

With average latency, the zStax StorCore 104 25G LUNs measured 1.73ms for reads and 20.98ms for writes. The 250G LUNs had a write latency nearly triple that (56.8ms) and a read latency roughly ten times higher (17.28ms).

With maximum latency we see a dramatic difference between the two zStax StorCore 104 setups. With the 25G LUNs, the read latency was 57.97ms and the write latency was 557.28ms. With the 250G LUNs, the read latency was 4,571.4ms (almost 80 times higher than the 25G LUNs) and the write latency was 14,597ms (over 25 times higher).

Our standard deviation benchmark shows similar standings. The zStax StorCore 104 25G LUNs measured 2.11ms for reads and 34.45ms for writes, while the 250G LUNs measured 49.29ms for reads and 290.17ms for writes.

After reconditioning the array for 8k workloads, we measured the zStax StorCore 104 throughput with a load of 16 threads and a queue depth of 16 for 100% read and 100% write operations. The zStax StorCore 104 achieved a read throughput of 158,960IOPS with 25G LUNs and 145,602IOPS with 250G LUNs. The write throughputs were 127,134IOPS at 25G LUNs and 85,225IOPS with 250G LUNs.

The next results are derived from a protocol composed of 70% read operations and 30% write operations with an 8k workload across a range of thread and queue counts. In terms of throughput, the 25G LUNs unsurprisingly outperformed the 250G LUNs, peaking at 41,602IOPS at the higher queue depths.

The average latency results during the 8k 70/30 benchmark mirror the throughput results. The 25G LUNs had lower latencies and ran more consistently throughout.

With max latency, the 25G LUNs maintained consistently low latency while the 250G LUNs fluctuated wildly.

Standard deviation calculations for the 8k 70/30 benchmark reveal no surprises. Again, the 25G LUNs remained consistently low while the 250G LUNs showed several spikes.

Our final synthetic benchmark is based on 128k transfers with 100% read and 100% write operations. Here the two setups ran neck and neck, with the 25G LUNs barely edging out the 250G LUNs. The 25G LUNs had a read throughput of 2,081,484KB/s and a write throughput of 1,432,781KB/s, while the 250G LUNs had a read throughput of 2,060,800KB/s and a write throughput of 1,361,100KB/s.

Conclusion

The Silicon Mechanics zStax StorCore 104 is a unified storage appliance based on Nexenta’s 4.0 SDS solution. The main selling points of the appliance are its very high scalability and its ability to be tailored to a specific business’s requirements thanks to the underlying commodity hardware. It comes with 2x Intel Xeon E5-2620 or E5-2670 processors per controller, a maximum of 512GB of RAM for primary read cache, 4x 1GbE ports (with optional 10GbE and 40GbE connections), and 28x drive bays in its starting 3U JBOD chassis. The zStax StorCore 104 is aimed at businesses that need enterprise-grade data services with support for block and file protocols and that want to eliminate vendor lock-in.

Looking at performance, the zStax StorCore 104 ran in the middle to lower end of the pack in our SQL Server testing protocol. On our enterprise synthetic workloads, the zStax StorCore 104 had a maximum throughput of 147,585IOPS read and 12,201IOPS write at a 4k transfer size. At an 8k transfer size, we saw the zStax StorCore 104 reach 145,602IOPS read and 127,134IOPS write. Peak bandwidth measured roughly 2.1GB/s read and 1.4GB/s write in the 128k transfer test.

For customers looking at this part of the storage market, the array of offerings can be confusing thanks to the dozens of options from vendors large and small, hardware-centric and SDS. This particular Nexenta solution is compelling in that it provides a large enterprise feature set (HA, data services, etc.) for a relatively modest price tag (roughly $40,000 for our review configuration). The solution plays best where a deep feature set and hardware vendor independence carry more weight than market-leading I/O and latency in this price band. It’s not that the zStax is a poor performer in that regard; it’s quite good on a per-dollar basis, but there are many other offerings that can beat it in a foot race, including entry offerings from the big storage vendors. However, Nexenta solutions offer near-infinite flexibility, something that solutions from the large storage vendors in many cases can’t or don’t offer at a reasonable price point.

For its part, Silicon Mechanics does a great job of harnessing what Nexenta can offer in a solution that can be packaged and marketed as a true midmarket storage offering. Silicon Mechanics provides the service and support necessary for these environments, including 24/7 support for deployment and technical issues as they arise, as well as remote monitoring for proactive issue resolution.

Pros

  • Highly scalable
  • Low entry price point
  • Tailor-built to the customer’s needs on commodity hardware
  • Silicon Mechanics adds a necessary layer of support and configuration consultation

Cons

  • SQL Server performance ranked near the bottom of the comparison group
  • GUI interface can be clunky in spots

Bottom Line

The zStax StorCore 104 is a unified Nexenta-based storage appliance, starting at 5U, that can be tailor-built for a wide range of storage needs. The system has a complete set of enterprise data services, offers very high scalability, and leverages commodity hardware for cost benefits.

zStax StorCore 104 Product Page
