by Kevin O'Brien

LSI Nytro WarpDrive WLP4-200 Enterprise PCIe Review

The LSI Nytro WarpDrive WLP4-200 represents LSI's second-generation effort in the enterprise PCIe application acceleration space. LSI builds on an extensive history of enterprise storage products with the newly rebranded line of acceleration products dubbed LSI Nytro. The Nytro family includes the PCIe WarpDrive of course, but also encompasses LSI's Nytro XD caching and Nytro MegaRAID products that leverage intelligent caching with on-board flash for acceleration, offering customers an entire suite of options as they evaluate high-performance storage. The Nytro WarpDrive comes in a variety of configurations, including both eMLC and SLC versions, with capacities ranging from 200GB up to 1.6TB.

Like the WarpDrive SLP-300 predecessor, the new Nytro WarpDrives work in much the same way, RAIDing multiple SSDs together. The Nytro WarpDrive uses fewer controllers/SSDs this time around, opting for four instead of the six in the original. The controllers have also been updated; the Nytro WarpDrive utilizes four latest-generation LSI SandForce SF-2500 controllers that are paired with SLC or eMLC NAND depending on the model. These SSDs are then joined together in RAID0 through an LSI PCIe to SAS bridge to form a 200GB to 1600GB logical block device. The drive is then presented to the operating system, which in this case could mean multiple Windows, Linux, and UNIX variants, with a well-established LSI driver that in many cases is built into the OS itself.
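Conceptually, the bridge's RAID0 arrangement maps each logical block address round-robin across the four member SSDs. A minimal sketch in Python, assuming a hypothetical 64KB stripe size (LSI doesn't publish the bridge's actual stripe geometry):

```python
# Illustrative RAID0 address mapping, as a PCIe-to-SAS bridge might perform it.
# The 64KB stripe size below is an assumption, not an LSI specification.
STRIPE_BYTES = 64 * 1024   # assumed stripe size
MEMBERS = 4                # four SandForce-based SSDs behind the bridge
SECTOR = 512

def raid0_map(lba):
    """Return (member_index, member_lba) for a logical 512-byte sector."""
    offset = lba * SECTOR
    stripe = offset // STRIPE_BYTES       # which stripe the offset falls into
    member = stripe % MEMBERS             # stripes rotate across the SSDs
    member_stripe = stripe // MEMBERS     # stripe index within that member
    member_offset = member_stripe * STRIPE_BYTES + offset % STRIPE_BYTES
    return member, member_offset // SECTOR

# Sequential I/O rotates across all four SSDs in turn,
# which is where the aggregate bandwidth comes from.
print(raid0_map(0))    # first sector lands on SSD 0
print(raid0_map(128))  # one stripe later, SSD 1
```

Because the striping is done in the bridge hardware, the OS only ever sees the single logical block device described above.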

In addition to LSI's renowned reputation for host compatibility and stability, the other core technology component of the Nytro WarpDrive is the SandForce controller family. LSI used the prior generation SF-1500 controllers in the SLP-300 first generation PCIe card; this time around they're using the SF-2500 family. While the controller itself has improved, there's also the added engineering benefit now that LSI has acquired SandForce. While the results may be more subtle, the benefits are there nonetheless and include improved support for the drive via firmware updates and generally a more tightly integrated unit.

While stability and consistent performance across operating systems are important, those features just open the door. Performance is key and the Nytro WarpDrive doesn't disappoint. At the top end, the cards deliver sequential 4K IOPS of 238,000 read and 133,000 write, along with sequential 8K IOPS of 189,000 read and 137,000 write. Latency is the other equally important performance spec; the Nytro WarpDrive posts latency as low as 50 microseconds.

In this review we apply our full suite of enterprise benchmarks, across both Windows and Linux, with a robust set of comparables, including the prior generation LSI card and other leading application accelerators. Per our usual depth, all of our detailed performance charts and content are delivered on a single page to make consumption of these data points as easy as possible.

LSI Nytro WarpDrive Specifications

  • Single Level Cell (SLC)
    • 200GB Nytro WarpDrive WLP4-200
      • Sequential IOPS (4K) - 238,000 Read, 133,000 Write
      • Sequential Read and Write IOPS (8K) - 189,000 Read, 137,000 Write
      • Bandwidth (256K) - 2.0GB/s Read, 1.7GB/s Write
    • 400GB Nytro WarpDrive WLP4-400
      • Sequential IOPS (4K) - 238,000 Read, 133,000 Write
      • Sequential Read and Write IOPS (8K) - 189,000 Read, 137,000 Write
      • Bandwidth (256K) - 2.0GB/s Read, 1.7GB/s Write
  • Enterprise Multi Level Cell (eMLC)
    • 400GB Nytro WarpDrive BLP4-400
      • Sequential IOPS (4K) - 218,000 Read, 75,000 Write
      • Sequential Read and Write IOPS (8K) - 183,000 Read, 118,000 Write
      • Bandwidth (256K) - 2.0GB/s Read, 1.0GB/s Write
    • 800GB Nytro WarpDrive BLP4-800
      • Sequential IOPS (4K) - 218,000 Read, 75,000 Write
      • Sequential Read and Write IOPS (8K) - 183,000 Read, 118,000 Write
      • Bandwidth (256K) - 2.0GB/s Read, 1.0GB/s Write
    • 1600GB Nytro WarpDrive BLP4-1600
      • Sequential IOPS (4K) - 218,000 Read, 75,000 Write
      • Sequential Read and Write IOPS (8K) - 183,000 Read, 118,000 Write
      • Bandwidth (256K) - 2.0GB/s Read, 1.0GB/s Write
  • Average Latency < 50 microseconds
  • Interface - x8 PCI Express 2.0
  • Power Consumption - <25 watts
  • Form Factor - Low Profile (half-length, MD2)
  • Environmentals - Operational at 0 to 45C
  • OS Compatibility
    • Microsoft: Windows XP, Vista, 2003, 7; Windows Server 2003 SP2, 2008 SP2, 2008 R2 SP1
    • Linux: CentOS 6; RHEL 5.4, 5.5, 5.6, 5.7, 6.0, 6.1; SLES: 10SP1, 10SP2, 10SP4, 11SP1; OEL 5.6, 6.0
    • UNIX: FreeBSD 7.2, 7.4, 8.1, 8.2; Solaris 10U10, 11 (x86 & SPARC)
    • Hypervisors: VMware 4.0 U2, 4.1 U1, 5.0
  • End of Life Data Retention - >6 months SLC, >3 months eMLC
  • Product Health Monitoring - Self-Monitoring, Analysis and Reporting Technology (SMART) commands, plus additional SSD monitoring

Build and Design

The LSI Nytro WarpDrive is a Half-Height Half-Length x8 PCI-Express card comprising four custom form-factor SSDs connected in RAID0 to a main interface board. Being a half-height card, the Nytro WarpDrive is compatible with more servers by simply swapping the backplane adapter. Shown below is our Lenovo ThinkServer RD240, used in many of our enterprise tests, which supports full-height cards.

Similar to the previous-generation WarpDrive, LSI uses SandForce processors at the heart of the new Nytro WarpDrive. While the previous generation model used six SATA 3.0Gb/s SF-1500 controllers, the Nytro uses four SATA 6.0Gb/s SF-2500 controllers. The Nytro houses two of these SSDs in two sandwiched heatsink "banks" which are connected to the main board with a small ribbon cable. To interface these controllers with the host computer, LSI uses their own SAS2008 PCIe to SAS bridge, which has wide driver support across multiple operating systems.

Unlike the first-generation WarpDrive, the new design uses passive heatsinks that allow the NAND and SandForce controllers to shed heat into the heatsink first, which is then passively cooled by airflow in the server chassis. This reduces hot spots and ensures more stable hardware performance over the life of the product.

A view from above the card shows the tightly sandwiched aluminum plates below, between, and on top of the custom SSDs that power the Nytro WarpDrive. The Nytro also supports legacy HDD indicator lights, for those who want that level of monitoring to be externally visible.

The LSI Nytro WarpDrive is fully PCIe 2.0 x8 power compliant, consuming less than 25 watts during operation. This allows it to operate without any external power attached and gives it broader hardware compatibility than devices such as the Fusion-io "Duo" cards, which require external power (or support for drawing power beyond the PCIe spec) to operate at full performance.

Each of the four SSDs powering the 200GB SLC LSI Nytro WarpDrive has one SandForce SF-2500 controller, and eight 8GB Toshiba SLC Toggle NAND pieces. This gives each SSD a total capacity of 64GB, which is then over-provisioned 22% to have a usable capacity of 50GB.
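The capacity figures above can be sanity-checked with some quick arithmetic (illustrative only, using the numbers from this section):

```python
# Back-of-the-envelope check of the 200GB SLC model's capacity figures:
# each SSD pairs one SF-2500 controller with eight 8GB SLC NAND packages.
raw_per_ssd_gb = 8 * 8              # 64GB of raw NAND per SSD
usable_per_ssd_gb = 50              # usable capacity after over-provisioning

op_percent = (raw_per_ssd_gb - usable_per_ssd_gb) / raw_per_ssd_gb * 100
print(f"over-provisioning: {op_percent:.0f}%")        # ~22%, as stated
print(f"card capacity: {4 * usable_per_ssd_gb}GB")    # 200GB across 4 SSDs
```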

Software

To manage their Nytro WarpDrive products, LSI gives customers the CLI Nytro WarpDrive Management Utility. The management utility allows users to update the firmware, monitor the drive's health, as well as format the WarpDrive to different capacities by adjusting the level of over-provisioning. Multiple versions of the utility are offered depending on the OS that's required, with Windows, Linux, FreeBSD, Solaris, and VMware supported.

The Nytro WarpDrive Management Utility is as basic as they come, giving users just enough information and options to get the job done. Since most of these cards' time is spent in production, few IT administrators will load this utility up on a day-to-day basis, although the amount of information still felt lacking compared to what other vendors offer.

From a health monitoring perspective, the LSI management utility only reports the drive's temperature and a simple yes/no response when it comes to gauging how far into its useful life the WarpDrive is. While a percentage reading of Warranty Remaining gives some indication of health, detailed figures for total bytes written and total bytes read would do a much better job of letting the user know how heavily the card has been used and how much life it has left.

Another feature the utility offers that wasn't supported by the first-generation WarpDrive is the ability to change the over-provisioning level of the logical block device. In its stock configuration, our 200GB SLC Nytro WarpDrive had a usable capacity of 186.26GB, while the performance over-provisioning mode dropped that amount to 149.01GB. A third max-capacity over-provisioning mode was also listed, although it wasn't supported on our model.

Nytro WarpDrive Formatting Modes (for 200GB SLC):

  • Performance over-provisioning - 149.01GB
  • Nominal over-provisioning - 186.26GB
  • Max capacity over provisioning - Not supported on our review model
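The reported figures line up with the usual decimal-vs-binary capacity conversion: vendors quote decimal gigabytes (10^9 bytes) while the OS reports binary gibibytes (2^30 bytes). By the same conversion, the performance mode appears to format the card to roughly 160GB decimal; that is our inference from the figures above, not an LSI spec:

```python
# Decimal-GB to binary-GiB conversion, explaining why a "200GB" card
# shows up as 186.26GB in the utility.
GIB = 2**30

def decimal_gb_to_gib(gb):
    return gb * 10**9 / GIB

print(round(decimal_gb_to_gib(200), 2))  # 186.26 -> nominal over-provisioning
print(round(decimal_gb_to_gib(160), 2))  # 149.01 -> performance mode (inferred)
```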

Testing Background and Comparables

When it comes to testing enterprise hardware, the environment is just as important as the testing processes used to evaluate it. At StorageReview we use the same hardware and infrastructure found in many of the datacenters the devices we test are ultimately destined for. This includes enterprise servers as well as proper infrastructure equipment like networking, rack space, power conditioning/monitoring, and same-class comparable hardware to properly evaluate how a device performs. None of our reviews are paid for or controlled by the manufacturer of the equipment we are testing, with relevant comparables picked at our discretion from products we have in our lab.

StorageReview Enterprise Testing Platform:

Lenovo ThinkServer RD240

  • 2 x Intel Xeon X5650 (2.66GHz, 12MB Cache)
  • Windows Server 2008 Standard Edition R2 SP1 64-Bit and CentOS 6.2 64-Bit
  • Intel 5500+ ICH10R Chipset
  • Memory - 8GB (2 x 4GB) 1333MHz DDR3 RDIMMs

Review Comparables:

640GB Fusion-io ioDrive Duo

  • Released: 1H2009
  • NAND Type: MLC
  • Controller: 2 x Proprietary
  • Device Visibility: JBOD, software RAID depending on OS
  • Fusion-io VSL Windows: 3.1.1
  • Fusion-io VSL Linux: 3.1.1

200GB LSI Nytro WarpDrive WLP4-200

  • Released: 1H2012
  • NAND Type: SLC
  • Controller: 4 x LSI SandForce SF-2500 through LSI SAS2008 PCIe to SAS Bridge
  • Device Visibility: Fixed Hardware RAID0
  • LSI Windows: 2.10.51.0
  • LSI Linux: Native CentOS 6.2 driver

300GB LSI WarpDrive SLP-300

  • Released: 1H2010
  • NAND Type: SLC
  • Controller: 6 x LSI SandForce SF-1500 through LSI SAS2008 PCIe to SAS Bridge
  • Device Visibility: Fixed Hardware RAID0
  • LSI Windows: 2.10.43.00
  • LSI Linux: Native CentOS 6.2 driver

1.6TB OCZ Z-Drive R4

  • Released: 2H2011
  • NAND Type: MLC
  • Controller: 8 x LSI SandForce SF-2200 through custom OCZ VCA PCIe to SAS Bridge
  • Device Visibility: Fixed Hardware RAID0
  • OCZ Windows Driver: 1.3.6.17083
  • OCZ Linux Driver: 1.0.0.1480

Enterprise Synthetic Workload Analysis (Stock Settings)

The way we look at PCIe storage solutions dives deeper than just looking at traditional burst or steady-state performance. When looking at averaged performance over a long period of time, you lose sight of the details behind how the device performs over that entire period. Since flash performance varies greatly as time goes on, our new benchmarking process analyzes the performance in areas including total throughput, average latency, peak latency, and standard deviation over the entire preconditioning phase of each device. With high-end enterprise products, latency is often more important than throughput. For this reason we go to great lengths to show the full performance characteristics of each device we put through our Enterprise Test Lab.

We have also added performance comparisons to show how each device performs under a different driver set across both Windows and Linux operating systems. For Windows, we use the latest drivers at the time of the original review, with each device tested under a 64-bit Windows Server 2008 R2 environment. For Linux, we use a 64-bit CentOS 6.2 environment, which each Enterprise PCIe Application Accelerator supports. Our main goal with this testing is to show how OS performance differs, since having an operating system listed as compatible on a product sheet doesn't always mean the performance across them is equal.

All devices tested go through the same testing policy from start to finish. Currently, for each individual workload, devices are secure erased using the tools supplied by the vendor, preconditioned into steady-state with the identical workload the device will be tested with under a heavy load of 16 threads with an outstanding queue of 16 per thread, and then tested in set intervals in multiple thread/queue depth profiles to show performance under light and heavy usage. For tests with 100% read activity, preconditioning uses the same workload flipped to 100% write.

Preconditioning and Primary Steady-State Tests:

  • Throughput (Read+Write IOPS Aggregate)
  • Average Latency (Read+Write Latency Averaged Together)
  • Max Latency (Peak Read or Write Latency)
  • Latency Standard Deviation (Read+Write Standard Deviation Averaged Together)
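The four metrics above can be computed over per-interval samples like so; the latencies below are hypothetical numbers for illustration, not measurements from our charts:

```python
import statistics

# Hypothetical per-interval latency samples in milliseconds.
read_ms  = [0.95, 1.02, 0.98, 1.01, 0.99]
write_ms = [8.4, 8.6, 8.5, 51.0, 8.5]   # one spike a max-latency chart would catch

avg_latency = (statistics.mean(read_ms) + statistics.mean(write_ms)) / 2
max_latency = max(read_ms + write_ms)   # peak read OR write latency
stdev       = (statistics.stdev(read_ms) + statistics.stdev(write_ms)) / 2

print(f"avg {avg_latency:.2f}ms, max {max_latency:.1f}ms, stdev {stdev:.2f}ms")
```

Note how the single 51ms spike dominates the max-latency figure while barely moving the average, which is exactly why we report standard deviation alongside both.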

At this time, Enterprise Synthetic Workload Analysis includes four common profiles that attempt to reflect real-world activity. These were picked to have some similarity with our past benchmarks, as well as common ground for comparison against widely published values such as max 4K read and write speed, and the 8K 70/30 mix commonly used for enterprise drives. We also included two legacy mixed workloads, the traditional File Server and Webserver, each offering a wide mix of transfer sizes. These last two will be phased out and replaced with new synthetic workloads as application benchmarks in those categories are introduced on our site.

  • 4K
    • 100% Read or 100% Write
    • 100% 4K
  • 8K 70/30
    • 70% Read, 30% Write
  • File Server
    • 80% Read, 20% Write
    • 10% 512b, 5% 1k, 5% 2k, 60% 4k, 2% 8k, 4% 16k, 4% 32k, 10% 64k
  • Webserver
    • 100% Read
    • 22% 512b, 15% 1k, 8% 2k, 23% 4k, 15% 8k, 2% 16k, 6% 32k, 7% 64k, 1% 128k, 1% 512k
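The workload definitions above can be expressed as simple distributions; a quick sanity check confirms each transfer-size mix sums to 100%:

```python
# The four synthetic workload profiles as read percentage plus
# transfer-size mix (percentages taken from the list above).
profiles = {
    "4K":          {"read": 100, "sizes": {"4k": 100}},   # also run as 100% write
    "8K 70/30":    {"read": 70,  "sizes": {"8k": 100}},
    "File Server": {"read": 80,  "sizes": {"512b": 10, "1k": 5, "2k": 5, "4k": 60,
                                           "8k": 2, "16k": 4, "32k": 4, "64k": 10}},
    "Webserver":   {"read": 100, "sizes": {"512b": 22, "1k": 15, "2k": 8, "4k": 23,
                                           "8k": 15, "16k": 2, "32k": 6, "64k": 7,
                                           "128k": 1, "512k": 1}},
}

for name, p in profiles.items():
    assert sum(p["sizes"].values()) == 100, name
print("all transfer-size mixes sum to 100%")
```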

Looking at 100% 4K write activity under a heavy load of 16 threads and 16 queue over a 6 hour period, we found that the LSI Nytro WarpDrive offered slower but very consistent throughput compared to the other PCIe Application Accelerators. The Nytro WarpDrive started at roughly 33,000 IOPS 4K write, and leveled off at 30,000 IOPS at the end of this preconditioning phase. This compared to the first-generation WarpDrive that peaked at 130,000-180,000 IOPS and leveled off at 35,000 IOPS.

Average latency during the preconditioning phase quickly settled in at about 8.5ms, whereas the first-generation WarpDrive started around 2ms before tapering off to 7.2ms as it reached steady-state.

When it comes to max latency, there is little doubt that SLC is king, with spikes that are few and far between. The new Nytro WarpDrive had the lowest consistent max latency in Windows, which increased under its CentOS driver, but still remained very respectable.

Looking at the latency standard deviation, under Windows the Nytro WarpDrive offered some of the most consistent latency, matched only by the first-generation WarpDrive. In CentOS though, the standard deviation was more than double, at over 20ms versus 7.2ms in Windows.

After the PCIe Application Accelerators went through their 4K write preconditioning process, we sampled their performance over a longer interval. In Windows the LSI Nytro WarpDrive measured 161,170 IOPS read and 29,946 IOPS write, whereas its Linux performance measured 97,333 IOPS read and 29,788 IOPS write. Read performance in both Windows and Linux was higher than the previous-generation WarpDrive, although 4K steady-state write performance dropped roughly 5,000 IOPS.

The LSI Nytro WarpDrive offered the second-lowest 4K read latency, coming in behind the OCZ Z-Drive R4 with its eight SF-2200 controllers versus the Nytro WarpDrive's four SF-2500 controllers. Write latency was the slowest in the pack, measuring 8.54ms in Windows and 8.591ms in Linux (not counting the OCZ Z-Drive R4, which wasn't even in the same ballpark).

Looking at the highest peak latency over the duration of our final 4K read and write testing intervals, the LSI Nytro WarpDrive offered the lowest 4K write latency in the pack at 51ms in Windows. Its Linux performance measured 486ms, and it had one high 4K read blip in Windows measuring 1,002ms, but overall it ranked well versus our other comparables.

While peak latency will only show the single response time over an entire test, showing standard deviation gives the whole picture as to how well the drive behaves over the entire test. The Nytro WarpDrive came in towards the middle of the pack, with read latency standard deviation roughly twice that of the first-generation WarpDrive. Standard deviation in the write test was only slightly higher in Windows, but fell behind in Linux. In Windows, its write performance still came in towards the top of the pack, above the Fusion ioDrive Duo and OCZ Z-Drive R4.

The next preconditioning test works with a more realistic read/write workload spread, versus the 100% write activity in our 4K test. Here, we have a 70% read and 30% write mix of 8K transfers. Looking at our 8K 70/30 mixed workload under a heavy load of 16 threads and 16 queue over a 6 hour period, the Nytro WarpDrive quickly leveled off at 87,000 IOPS, finishing as the fastest drive in the group in Windows. The Nytro WarpDrive leveled off at around 70,000 IOPS in Linux, although that was still the fastest Linux performance in the group as well.

In our 8K 70/30 16T/16Q workload, the LSI Nytro WarpDrive offered by far the most consistent average latency, staying level at 2.9ms throughout our Windows test, and 3.6ms in Linux.

Similar to the behavior we measured in our 4K write preconditioning test, the SLC-based Nytro WarpDrive also offered extremely low peak latency over the duration of the 8K 70/30 preconditioning process. Its performance in Windows hovered around 25ms, while its Linux performance floated higher around 200ms.

While peak latency over small intervals gives you an idea of how a device is performing in a test, looking at its standard deviation shows you how closely those peaks were grouped. The Nytro WarpDrive in Windows offered the lowest standard deviation in the group, measuring almost half that of the first-generation WarpDrive. In Linux the standard deviation was much higher, by almost a factor of four, although that still ranked middle/top of the pack.

Compared to the fixed 16 thread, 16 queue max workload we performed in the 100% 4K write test, our mixed workload profiles scale the performance across a wide range of thread/queue combinations. In these tests we span our workload intensity from 2 threads and 2 queue up to 16 threads and 16 queue. The LSI Nytro WarpDrive was able to offer substantially higher performance at lower thread count workloads with a queue depth between 4 and 16. This advantage played out over most of the test in Windows, although in Linux that advantage was capped at roughly 70,000 IOPS, where the R4 (in Windows) was able to beat it in some areas.
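The thread/queue sweep described above can be laid out as a grid; assuming the sweep covers every combination of 2, 4, 8, and 16 for both axes (the review names only the endpoints and a few intermediate points), the "effective queue depth" at each point is simply threads times queue:

```python
# Hypothetical test matrix for the mixed-workload sweep, from 2T/2Q to 16T/16Q.
# "Effective queue depth" (threads x queue) is how points like 4T/8Q compare
# across devices.
levels = [2, 4, 8, 16]
matrix = [(t, q, t * q) for t in levels for q in levels]

print(matrix[0])    # lightest load: 2T/2Q, effective QD 4
print(matrix[-1])   # heaviest load: 16T/16Q, effective QD 256
```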

On the other half of the throughput equation, the LSI Nytro WarpDrive consistently offered some of the lowest latency in our 8K 70/30 tests. In Windows, the Nytro WarpDrive came in at the top of the pack, while the Z-Drive R4 in Windows beat the Nytro's performance in Linux.

In our 8K 70/30 test, the SLC-based LSI Nytro WarpDrive had more 1,000ms+ peak latency spikes in Windows, whereas the Linux driver kept those suppressed until the higher 16-thread workloads. While this behavior didn't differ from the Fusion ioDrive Duo or Z-Drive R4, it had more high latency spikes than the first-generation WarpDrive in Windows, especially under more demanding loads.

While the occasional high spikes might look discouraging, the full latency picture can be seen when looking at the latency standard deviation. In our 8K 70/30 workload, the LSI Nytro WarpDrive offered the lowest standard deviation throughout the bulk of our 8K tests.

The File Server workload represents a larger transfer-size spectrum hitting each particular device, so instead of settling in for a static 4k or 8k workload, the drive must cope with requests ranging from 512b to 64K. In our File Server throughput test, the OCZ Z-Drive R4 had a commanding lead in both burst and as it neared steady-state. The LSI Nytro WarpDrive started off towards the bottom of the pack between 39,000-46,000 IOPS, but remained there over the duration of the test, while the Fusion ioDrive Duo and first-generation WarpDrive slipped below it.

Latency in our File Server workload followed a similar path on the LSI Nytro WarpDrive as it did in the throughput section, where it started off relatively high in terms of its burst capabilities, but held steady over the duration of the test. This rock-steady performance allowed it to come in towards the top of the pack, while the others eventually slowed down over the endurance section of the preconditioning phase.

With its SLC NAND configuration, our 200GB Nytro WarpDrive remained rather calm over the duration of our File Server preconditioning test, offering some of the lowest latency spikes of the bunch. In this section the first-generation WarpDrive offered similar performance, as did the Fusion ioDrive Duo, although the latter had many spikes into the 1,000ms range.

The LSI Nytro WarpDrive easily came out on top when looking at the latency standard deviation in the File Server preconditioning test. With a single spike, it was nearly flat at 2ms for the duration of this 6 hour process, and proved to be more consistent than the first-generation WarpDrive.

Once our preconditioning process finished under a high 16T/16Q load, we looked at File Server performance across a wide range of activity levels. Similar to the Nytro's performance in our 8K 70/30 workload, it was able to offer the highest performance at low thread and queue depth levels. This lead was taken over by the OCZ Z-Drive R4 in the File Server workload at levels above 4T/8Q, where the R4's eight controller count helped it stretch its legs further. Over the remaining portion of our throughput test, the Nytro WarpDrive came in second under the Z-Drive R4 in Windows.

With high throughput also comes low average latency, where the LSI Nytro WarpDrive was able to deliver very good response times at lower queue depths, measuring as low as 0.366ms at 2T/2Q. It wasn't the quickest though, as the ioDrive Duo held the top spot, measuring 0.248ms in the same portion of the test. As the loads increased, the Nytro WarpDrive came in just under the OCZ Z-Drive R4 while utilizing half the controllers.

Comparing the File Server workload max latency between the OCZ Z-Drive R4 and the LSI Nytro WarpDrive, it's easy to see what the advantage of SLC NAND is. Over the duration of the different test loads, the SLC-based Nytro WarpDrive and first-generation WarpDrive both offered some of the lowest peak response times and fewest overall peaks.

Our latency standard deviation analysis reiterated that the Nytro WarpDrive was able to come in with class-leading performance over the duration of our File Server workload. The one area where responsiveness started to slip was under the 16T/16Q workload, where the Nytro WarpDrive in Linux had more variation in its latency.

Our last workload is rather unique in the way we analyze the preconditioning phase of the test compared to the main output. As a workload designed with 100% read activity, it's difficult to show each device's true read performance without a proper preconditioning step. To keep the conditioning workload the same as the testing workload, we inverted the pattern to be 100% write. For this reason the preconditioning charts are much more dramatic than the final workload numbers.

While it didn't quite turn into an example of slow and steady winning the race, the Nytro WarpDrive had the lowest burst throughput (not counting the R4's problematic Linux driver), but as the other devices slowed towards the end of the preconditioning process, the Nytro WarpDrive climbed to second place behind the R4 in Windows. This put it ahead of both the ioDrive Duo and first-generation WarpDrive under our heavy 16T/16Q inverted Web Server workload.

Average latency of the Nytro WarpDrive in our Web Server preconditioning test stayed flat at 20.9ms over the duration of the test. This compared to 31ms from the first-generation WarpDrive towards the second half of the test.

The LSI Nytro WarpDrive proved the most responsive PCIe Application Accelerator in Windows during our Web Server preconditioning test. It kept its peak response times under 120ms in Windows, and just above 500ms in Linux.

With barely a spike in our Web Server preconditioning test, the LSI Nytro WarpDrive impressed again with its incredibly low latency standard deviation. In Windows, it offered the most consistent performance, coming out on top of the first-generation WarpDrive. Its performance in Linux didn't fare as well, but still came in towards the middle of the pack.

Switching back to a 100% read Web Server workload after the preconditioning process, the OCZ Z-Drive R4 offered the highest performance in Windows, but only after an effective queue depth of 32. Before that the Nytro WarpDrive was able to come out on top with lower thread counts over a queue depth of 4. The leader in the low thread/low queue depth arena was still the Fusion ioDrive Duo.

The LSI Nytro WarpDrive was able to offer impressively low latency in our Web Server workload, measuring as low as 0.267ms in Linux with a 2T/2Q load. Its highest average response time was 4.5ms in Linux with a 16T/16Q load. Overall it performed very well, bested only by the OCZ Z-Drive R4 in Windows under higher effective queue depths.

All of the PCIe Application Accelerators suffered from some high latency spikes in our Web Server test, with minimal differences between OS, controller or NAND type. Overall Linux was LSI's strong suit for both the Nytro WarpDrive and first-generation WarpDrive, having fewer latency spikes versus the performance in Windows.

While the peak latency performance may seem problematic, what really matters is how the device performs over the entire duration of the test. This is where latency standard deviation comes into play, measuring how consistent the latency was overall. While the LSI Nytro WarpDrive in Windows had more frequent spikes compared to its Linux performance, it had a lower standard deviation in Windows under higher effective queue depths.

Conclusion

The LSI Nytro WarpDrive WLP4-200 represents a solid step forward for LSI's application acceleration line. It's quicker in most areas than the prior generation SLP-300, thanks to the updated SandForce SF-2500 controller and improved firmware used this time around. Structurally it's simpler as well, dropping from six drives in RAID0 to four. LSI has also added a range of capacity and NAND options for the Nytro WarpDrive line, giving buyers everything from 200GB in SLC up to 1.6TB in eMLC. Overall the offering is more complete and well-rounded, offering flexibility which should increase market adoption for the Nytro WarpDrive family at large.

A big selling point for LSI is the compatibility of their products on a hardware and OS level. We noted strong performance from the Nytro WarpDrive in both our Windows and Linux tests. The Windows driver set was definitely more polished, offering much higher performance in some areas. While the ioDrive Duo also showed very good multi-OS support, the same cannot be said about OCZ's Z-Drive R4, which had a gigantic gap in performance between its Windows and Linux drivers.

When it comes to management, LSI offers software tools to check the health and handle basic commands for most major operating systems. Their CLI WarpDrive Management Utility is basic, but still gets the job done when it comes to formatting or over-provisioning the drive. The software suite is certainly a bit spartan, but even these tools are appreciated as some in the PCIe storage space don't offer much of anything when it comes to drive management. 

The most surprising aspect of the LSI Nytro WarpDrive is its behavior in our enterprise workloads. Compared to other PCIe Application Accelerators we've tested, its burst performance wasn't the most impressive, but the fact that it remained rock solid over the duration of our tests was. What it lacked in speed off the line, it more than made up for in consistent latency with incredibly low standard deviation under load. For enterprise applications that demand a narrow window of acceptable response times under load, low max latency and standard deviation separate the men from the boys. It's also important to remember that SandForce-based drives have compression benefits that aren't highlighted in this type of workload testing. For this reason, and to show an even more complete profile of enterprise drive performance, StorageReview is currently building out a robust set of application-level benchmarks that may show further differences between enterprise storage products.

Pros

  • Increased performance while reducing controller count
  • Industry leading host system compatibility
  • More NAND and capacity options than previous-generation WarpDrive
  • Incredibly consistent latency under stress

Cons

  • Limited software tools for drive management
  • Weaker burst performance (excellent steady-state performance)

Bottom Line

The LSI Nytro WarpDrive WLP4-200 is a solid PCIe application accelerator and will win over enterprise customers for its excellent steady state performance, consistent performance over a variety of uses, and class-leading compatibility with host systems. LSI did a good job with the Nytro WarpDrive from hardware design to smooth operation, with our main complaints being around drive management tools. While it doesn't burst out of the gate as fast as others, that's usually not terribly important to the enterprise and there's something to be said for a drive that works well out of the box, and continues to operate well, in just about any operating system. 
