Just how much performance variation exists between multiple samples of a given drive? This question pops up every so often in the StorageReview.com Discussion Forums as well as in Usenet newsgroups. Some say that differences in performance can be significant, while others claim that manufacturers adhere to strict performance tolerances when shipping drives (e.g., a drive's seek time must fall within 0.05 ms of spec for the drive to be considered shippable).
As our readers know, StorageReview.com uses Ziff-Davis' WinBench 99 to measure access time. From access time, seek time can easily be calculated by subtracting the drive's average rotational latency. It is often interesting to compare measured seek times to the figures specified by the manufacturer. When the seek time derived from WinBench's reported access time differs significantly from the manufacturer's claim, there are three possibilities: 1) the manufacturer's specified seek time is either too high or too low, 2) WinBench is wrong, or 3) the particular drive tested, for whatever reason, has an out-of-spec seek time.
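The calculation described above is straightforward: average rotational latency is the time for half a platter rotation, so subtracting it from the measured access time yields the average seek time. A minimal sketch follows; the 7,200 RPM spindle speed and 12.5 ms access time are hypothetical example figures, not measurements from this article.

```python
def seek_time_ms(access_time_ms: float, rpm: float) -> float:
    """Derive average seek time from a measured average access time.

    Average rotational latency is the time for half a rotation:
        latency_ms = 0.5 * 60000 / rpm
    """
    avg_rotational_latency_ms = 0.5 * 60_000 / rpm  # half a rotation, in ms
    return access_time_ms - avg_rotational_latency_ms

# Hypothetical example: a 7,200 RPM drive measured at 12.5 ms access time.
# Rotational latency = 0.5 * 60000 / 7200 ≈ 4.17 ms, so seek ≈ 8.33 ms.
print(round(seek_time_ms(12.5, 7200), 2))  # → 8.33
```

The same arithmetic, run in reverse, shows why a small discrepancy in measured access time translates directly into the seek-time discrepancies discussed next.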
How does one decide which case is correct? To arrive at an accurate conclusion, multiple samples of the drive in question must be tested. For the most accurate results, the drives should also be of the same capacity. Unfortunately, StorageReview.com rarely has the chance to do this; we generally receive only one sample for review.