Operating Systems and Benchmarks - Part 5
  March 13, 2000 Author: Eugene Ra  

StorageReview.com's IOMeter Tests

We're attempting to assess the performance of various hard disks under both Workstation and File Server patterns. The case for a File Server is easy enough: IOMeter comes bundled with an example access pattern. Assessing a workstation's access pattern, however, is more difficult. Amazingly, both Intel itself and the drive manufacturers we approached declined to offer specifics that would guide us toward a decent workstation pattern.

Generally speaking, it's evident that random access dominates typical workstation usage (which is the principal reason ThreadMark has been jettisoned). Even WinBench corroborates this when pressed on the issue. The question is: how much randomness is sufficient?

Though the loading of executables, DLLs, and other libraries is at first a sequential process, subsequent accesses are random in nature. Though the files themselves may be relatively large, parts of them are constantly being sent to and retrieved from the swapfile. Swapfile accesses, terribly fragmented in nature, are quite random. Executables also call other necessary files such as images, sounds, etc. These files, though they may represent large sequential accesses, constitute a very small percentage of accesses compared with the constant swapping that occurs with most system files. Combined with the natural fragmentation that plagues the disks of all but the most dedicated defragmenters, these factors clearly indicate that erring on the side of randomness is preferable.

Also in question is the balance of reads vs. writes. On most systems, reads account for the vast majority of I/Os. Writes occur with the one-time installation of applications, the writing of data files, and most importantly, swapfile writes. Only in the case of huge, real-time writing (such as A/V recording) do writes consume the majority of operations.

StorageReview.com has standardized on the following Access Patterns:

Access Patterns

% of Access Specification    Transfer Request Size    % Reads    % Random

File Server Access Pattern (as defined by Intel)
10%                          0.5 KB                   80%        100%
5%                           1 KB                     80%        100%
5%                           2 KB                     80%        100%
60%                          4 KB                     80%        100%
2%                           8 KB                     80%        100%
4%                           16 KB                    80%        100%
4%                           32 KB                    80%        100%
10%                          64 KB                    80%        100%

Workstation Access Pattern (as defined by StorageReview.com)
100%                         8 KB                     80%        80%

Database Access Pattern (as defined by Intel/StorageReview.com)
100%                         8 KB                     67%        100%
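
For readers who want to experiment with these specifications outside of IOMeter, here's a rough Python sketch of how a weighted spec like the File Server pattern can be sampled by a synthetic load generator. The names and structure are our own illustration, not IOMeter's internal code:

    import random

    # Intel's example File Server spec: (weight in %, transfer size in KB).
    # Every access is random; 80% of accesses are reads.
    FILE_SERVER_SPEC = [
        (10, 0.5), (5, 1), (5, 2), (60, 4),
        (2, 8), (4, 16), (4, 32), (10, 64),
    ]

    def next_request(spec=FILE_SERVER_SPEC, read_pct=80):
        """Draw one synthetic I/O request as (transfer size in KB, is_read)."""
        weights = [w for w, _ in spec]
        sizes = [s for _, s in spec]
        size_kb = random.choices(sizes, weights=weights, k=1)[0]
        is_read = random.random() < read_pct / 100
        return size_kb, is_read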

The "Workstation" Access Pattern will no doubt be quite controversial among our readership. Let's take a closer look at what 8k blocks combined with 80% randomness yields:

Workstation Access Pattern - Analysis

% of total I/Os    Transfer Size
80%                8 KB
16%                16 KB
3.2%               24 KB
0.64%              32 KB
0.128%             40 KB
0.0256%            48 KB
0.00512%           56 KB
0.00128%           64 KB or greater

Basically, this access pattern allows for predominantly random access in an 8k cluster environment along with occasional sequential runs that may escape the swapfile and/or natural fragmentation. The pattern also assumes predominantly read-oriented use, with the vast majority of writes consisting of swapfile hits. It will be the pattern we refer to when mentioning "Workstation IOMeter Performance" in a review.
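
The percentages in the table fall straight out of the 80/20 random/sequential split: each 8k request has a 20% chance of sequentially extending the one before it, so the odds of an effective contiguous transfer spanning k consecutive 8k blocks shrink geometrically. A few lines of Python reproduce the table (our own arithmetic, not an IOMeter output):

    # P(exactly k consecutive 8 KB blocks) = 0.8 * 0.2^(k - 1);
    # everything of 8 blocks or more is lumped into the final row.
    P_SEQ = 0.20

    for k in range(1, 8):
        pct = 100 * (1 - P_SEQ) * P_SEQ ** (k - 1)
        print(f"{pct:g}%  ->  {8 * k} KB")
    print(f"{100 * P_SEQ ** 7:g}%  ->  64 KB or greater")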

The "Database" Access Pattern is named as such because IOMeter's default access pattern, identical except for its 2k block size, is described as "representing a typical database workload." 8k accesses imply huge records, but in reality the difference between transferring 8k vs. 2k at 20 MB+/sec is negligible compared to the time it takes to move a drive's actuator into place and for the proper data to spin under the read/write heads. This pattern, despite the arbitrary name we've assigned to it, may in the eyes of some folks be more representative of a typical workstation due to its even higher bias towards random accesses.
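
To put rough numbers behind that claim (the drive characteristics below are assumed, ballpark figures for a circa-2000 disk, not measurements), the transfer-time difference between 2k and 8k requests is a fraction of a millisecond, while mechanical positioning costs on the order of 12 ms either way:

    TRANSFER_RATE_MBPS = 20      # assumed sustained transfer rate
    POSITIONING_MS = 8 + 4       # assumed average seek + rotational latency

    def transfer_ms(size_kb):
        return size_kb / (TRANSFER_RATE_MBPS * 1024) * 1000

    print(transfer_ms(2))        # ~0.10 ms
    print(transfer_ms(8))        # ~0.39 ms -- both dwarfed by ~12 ms of positioning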


# of Outstanding I/Os- IOMeter Loads

Tweaking the # of simultaneous outstanding requests affects the aggregate load the tested drive is under. A depth of 1 is an extremely linear load which, combined with a 100% random test such as our "Database" pattern, can basically serve as a measurement of average random access times. A depth of 1, however, is not representative of any real kind of access. A depth of 4, on the other hand, can represent the most rudimentary of operations such as, say, loading up Windows calculator. Using Win2k's perfmon.exe, we've noted spikes of 30-50 outstanding I/Os in our own usage when loading applications. Spikes of greater than 100 I/Os occur during heavy disk access such as defragmenting a drive.

As a result, we've decided to subject each of our three access patterns to the following 5 loads:

Loads

Linear          1 Outstanding I/O
Very Light      4 Outstanding I/Os
Light          16 Outstanding I/Os
Moderate       64 Outstanding I/Os
Heavy         256 Outstanding I/Os
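
To make the notion of "outstanding I/Os" concrete, here's a rough, Unix-flavored Python sketch that keeps several random 8k reads in flight against a large file or raw device and reports the achieved I/O rate. It only illustrates the concept; it is not how IOMeter itself is implemented, and the batch-at-a-time loop is a simplification of a truly constant queue depth:

    import os, random, time
    from concurrent.futures import ThreadPoolExecutor

    def run_load(path, outstanding_ios=16, size=8 * 1024, duration_s=10):
        """Issue random reads, `outstanding_ios` at a time, and return IOPS."""
        fd = os.open(path, os.O_RDONLY)
        span = os.fstat(fd).st_size - size
        deadline = time.time() + duration_s
        completed = 0

        def one_io():
            os.pread(fd, size, random.randrange(0, span))

        with ThreadPoolExecutor(max_workers=outstanding_ios) as pool:
            while time.time() < deadline:
                # Submit a batch and wait for all of it; IOMeter instead keeps
                # the queue constantly full by reissuing as each I/O completes.
                for future in [pool.submit(one_io) for _ in range(outstanding_ios)]:
                    future.result()
                completed += outstanding_ios

        os.close(fd)
        return completed / duration_s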


Other Miscellaneous Settings

Unless otherwise specified, an IOMeter trial runs until the user manually halts the operation. We've standardized on a 10 minute timer for each of our 15 tests (3 access patterns, each under 5 separate loads). We've also implemented a 30 second "ramp-up" period during which the patterns run without counting toward the final scores, to eliminate any idiosyncrasies that may occur within the first few seconds of testing.

We've found through extensive testing that IOMeter is relatively impervious to external system variables. Aside from the obvious case of CPU utilization-related fields, IOMeter is an amazingly tolerant benchmark (thanks to its low-level nature) that yields numbers somewhat comparable between systems. StorageReview.com, of course, will officially treat the matter with white gloves: our testbed will always remain consistent! We'll cringe less, however, when users compare their IOMeter results to ours. Another nice side effect of IOMeter's consistency is that no rebooting (and, since we're testing using the recommended method of accessing an unpartitioned physical disk, no formatting!) is necessary.

StorageReview.com is using a beta version of IOMeter, build 1999.10.20, that includes extra facilities to run multiple trials. Combined with the fact that reboots are unnecessary, we've batched up a single test that runs all 15 trials (each 10 minutes plus a 30 second ramp-up) in a neat, tidy, and unattended 2 hours, 37 minutes and 30 seconds. Results from this version are comparable with the current release.

IOMeter delivers a multitude of different test results. Many are different ways of saying the same thing. Others apply only to multiple-CPU/networked setups. Thus, we've chosen to distill our reported results down to a select few:

Total I/Os Per Second- The most significant result, the total I/O count simply measures the average number of requests completed in a second. A request, of course, consists of actuator and rotational positioning followed by a block read or write- 8k in the Workstation/Database instances and anywhere from 0.5k to 64k in the case of the File Server Pattern.

Total MBs Per Second- Really just an extrapolation of Total I/Os per second, this entry measures the amount of data transferred. In the case of the Workstation and Database patterns, this # is simply the Total I/Os Per Second figure, multiplied by 8k per I/O, divided by 1024 (the quotient of a megabyte and a kilobyte). The File Server model is a tiny bit more complex due to its variable transfer size per I/O, but we're sure one gets the idea.
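
As a worked example of that arithmetic (the 120 I/Os per second below is an arbitrary illustrative figure, and the File Server case uses the weighted average of the transfer sizes in the table earlier on this page):

    def mb_per_sec(total_ios_per_sec, transfer_kb):
        return total_ios_per_sec * transfer_kb / 1024

    print(mb_per_sec(120, 8))          # Workstation/Database: ~0.94 MB/sec

    # File Server: average transfer size is the weighted mean of the spec (~11.08 KB)
    fs_spec = [(10, 0.5), (5, 1), (5, 2), (60, 4), (2, 8), (4, 16), (4, 32), (10, 64)]
    avg_kb = sum(w * s for w, s in fs_spec) / 100
    print(mb_per_sec(120, avg_kb))     # ~1.30 MB/sec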

Average I/O Response Time- At the linear (1 outstanding I/O) level, this figure is simply another way to express Total I/Os Per Second. The Total I/Os figure, simply put, is 1000 milliseconds divided by the Average I/O Response Time. As the # of outstanding I/Os increases, however, things get a bit more complex. Average I/O Response Time increases, but not in a fashion linear to the increase in outstanding I/Os. This results from drive firmware optimizations, interface/bus optimizations, and optimizations in the disk access routines in Win2k itself.
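
The depth-1 relation above is a special case of the general identity (Little's law) tying queue depth, throughput, and average response time together; a short sketch, with the IOPS figures chosen purely for illustration:

    def avg_response_ms(total_ios_per_sec, outstanding_ios=1):
        # queue depth = IOPS x average response time, so response = depth / IOPS
        return outstanding_ios * 1000 / total_ios_per_sec

    print(avg_response_ms(100, 1))     # 10 ms under a linear load
    print(avg_response_ms(160, 4))     # 25 ms: four times the queue, but since
                                       # reordering lifted IOPS, response time
                                       # less than quadrupled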

CPU Utilization- In most cases, this figure by itself won't appear in our database. It's simply the raw percentage of CPU cycles used in completing requests. It's a very low figure for a 700 MHz processor. Then again, it's a low figure for a 266 MHz processor. At any rate, more useful is...

I/Os per % CPU Utilization- This is IOMeter's "CPU Effectiveness" result given a more intuitive moniker. It simply takes the Total I/Os Per Second result and divides it by CPU Utilization (which in most cases is less than 1%, hence the increase in the value). Comparisons between drives utilizing this figure are more valid than comparisons using the straight CPU Utilization figure.
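
Expressed as a formula (the figures are again made up for illustration):

    def ios_per_cpu_percent(total_ios_per_sec, cpu_utilization_pct):
        return total_ios_per_sec / cpu_utilization_pct

    print(ios_per_cpu_percent(120, 0.8))   # 150 I/Os per % CPU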

When considered in the light of 15 separate cases (the three access patterns and five load levels), it's clear that we're going to be faced with a tremendous amount of data. To make the lives of our readers easier, we've created a database that attempts to address this pile of data in an intuitive and user-friendly fashion.

 The New Database Unveiled...


