October 1st, 2013 by Brian Beeler
Violin and Kaminario Square off Over 2 Million IOPS - But it Doesn't Matter
The last two weeks have seen quite a bit of nonsense in the flash-array space. Some of it is good old-fashioned competition in a segment that receives a ton of media attention, but most of it revolves around claims about who’s fastest in a given test. The main event has been a slap-fest between Violin and Kaminario over IOPS, application testing and who’s right when it comes to flash storage architecture.
2 Million IOPS
As much as the industry talks about moving away from IOPS figures, which are essentially meaningless in the working world of storage, IOPS remain the easiest single score for many people to understand. They’re usually gigantic numbers that make storage systems look fast, regardless of the expense required to get there or of actual quickness in performing application-specific tasks.
Kaminario is quick to assert that Violin’s quoted 2 million IOPS is nothing special; Kaminario showed 2MM IOPS at VMworld last year with its MLC K2 storage system. Violin, whose booth was stationed nearby, was quick to argue against Kaminario’s fundamental design and to push the focus onto application-level benchmarks like VMmark; Violin has long touted itself as the best thing going for VMs.
As a result, it’s a bit odd that Violin feels compelled to see who can shout loudest in the IOPS argument. Recently, in a joint deal with Fibre Channel players, Violin paid StorageSwiss to “audit a test lab” where the workload generator FIO was used to crank out “results [that] ranged from 2.3 million IOPS on 100% random read tests to 1.7 million IOPS with a 70/30 read/write mix (using 8K packet sizes).” For Kaminario’s part, the company quickly pumped out IOMeter results that were roughly the same, at what it contends is a much lower price point with a better-architected solution.
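For readers who want to sanity-check numbers like these on their own hardware, the two workloads StorageSwiss describes can be approximated with a small FIO job file. This is a hedged sketch, not the audited configuration: the target device, queue depth and thread count below are assumptions, since the actual test parameters weren’t disclosed.

```ini
; Approximation of the two reported workloads: 100% random read
; and a 70/30 random read/write mix, both at an 8K block size.
; /dev/sdX, iodepth and numjobs are placeholders -- tune for your system.
[global]
ioengine=libaio      ; asynchronous IO on Linux
direct=1             ; bypass the page cache
bs=8k                ; the "8K packet sizes" from the quoted results
iodepth=32           ; assumed queue depth per job
numjobs=8            ; assumed worker count
runtime=60
time_based
group_reporting
filename=/dev/sdX    ; CAUTION: writing to a raw device is destructive

[random-read-100]
rw=randread
stonewall            ; finish this job before starting the next

[mixed-70-30]
rw=randrw
rwmixread=70         ; 70% reads / 30% writes
stonewall
```

Run with `fio jobfile.fio`; FIO reports per-job IOPS, which is the figure both vendors are quoting.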
Do IOPS Matter?
The short answer is yes…and also no. IOPS on their own aren’t terribly useful, but they at least give us a way to compare measured results against the storage vendor’s stated expectations, and they provide some common yardstick for comparing similar systems in the same environment. Of course the 2MM IOPS battle highlighted above does neither, since each vendor is either running its own numbers or paying a third party to audit them; either way, the results should be taken with an inquisitive eye. Achieving the raw IO throughput isn't all that difficult either; we posted 2MM 4K IOPS in a single 2U server with consumer-grade SSDs, so it does not in fact "take a village" to reach such heady results.
What really matters is what applications can do with the storage at hand, in this case high-end flash arrays. To that end, StorageReview has been migrating away from workload-generated results, using them only in limited fashion, and focusing instead on developing infrastructures that measure application performance in a more meaningful way (VMmark, MarkMail, MarkLogic NoSQL, MySQL OLTP). Going this route, we’re able to provide more meaningful results within a stable and reproducible architecture, results we believe are more accurate and more relevant to our readership than most others.
Many storage vendors agree; projects are currently in the works with Dell, EchoStreams, Fusion-io, HP, Huawei, Lenovo, LSI, Micron, NetApp, SanDisk, Virident and many others working to provide flash storage solutions to the enterprise. Some vendors fighting in this space have offered to get review units into our lab, only to rescind once we started talking testing protocol and the fact that our results get published without the ability to pick and choose which numbers get posted. That lack of control is terrifying for these players, which is an interesting point to ponder when making a buying decision: who’s open to independent evaluation and who’s not.