What makes a hard drive fast? This is the same rhetorical question we posed when opening our investigation into performance differences between the 512k buffer and 1024k buffer versions of the Maxtor DiamondMax Plus 5120. Here at StorageReview.com, we believe that there are four principal factors involved in hard disk performance.
Some argue that the least important of these (again, reference our look into buffer differences), and certainly the hardest to quantify, is the buffer/firmware subsystem; the two components are inextricably locked together. Is this subsystem truly the "least important"? On the one hand, we have the results from our look at the DiamondMax Plus 5120. On the other, there are times when a drive that boasts superior physical specifications proves slower than its competitors in both benchmarks and everyday use. Quantifiable only by buffer size, the buffer/firmware subsystem always remains nebulous.
Average seek time (the definition of which seems to be under fiery debate in Usenet newsgroups at the time of this writing) used to be by far the most quoted hard disk spec. We believe there's been somewhat of a "backlash" against the idea that seek time is important. Back in the old days of PC hard disks that took 65ms-110ms to move a drive's actuator to the cylinder where requested data resided, seek time was vitally important. Seek time dwarfed a 3600rpm (the standard speed) disk's 8ms rotational latency, making seek and access times virtually synonymous. These days, we're seeing quoted seek times of under five milliseconds. Though rotational latency hasn't experienced nearly the same cuts, and has thus become a significant factor in hard disk performance, seek time still remains quite important.
Sequential transfer rates (STR) have been pegged by many as the most important factor in hard disk performance. STR, the natural complement to seek time, has enjoyed a renaissance of sorts. The maximum STR of a hard disk interface, after all, has been endlessly touted with the release of the ATA-33 and now ATA-66 standards. Basically stated, STR measures how quickly the drive reads the requested data once its actuator has been moved into place. STR is a function of both the number of sectors stuffed in a track and the speed at which the disk spins. As areal densities increase, linear data densities (the number of 512-byte sectors in a track) also tend to increase (as a rough estimate, assume linear density increases with the square root of the areal density increase). In StorageReview.com's measurements, STR at the beginning and end of a disk is displayed in WinBench 99's "Disk Read/Transfer Rate" tests. Today, we're witnessing STRs in excess of 30 megabytes a second.
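The relationship above can be sketched in a few lines of arithmetic. The sector count below is an illustrative placeholder, not any particular drive's geometry:

```python
# Back-of-envelope STR math. The 350-sectors-per-track figure is a
# hypothetical example, not taken from any drive's datasheet.

def str_mb_per_sec(sectors_per_track, rpm, sector_bytes=512):
    """Media transfer rate: bytes per track times revolutions per second."""
    revs_per_sec = rpm / 60.0
    return sectors_per_track * sector_bytes * revs_per_sec / 1_000_000

def linear_density_scale(areal_density_ratio):
    """Rough rule of thumb: linear density grows with the square root
    of the areal density increase."""
    return areal_density_ratio ** 0.5

# A hypothetical 7200rpm drive with 350 sectors on its outer tracks:
print(round(str_mb_per_sec(350, 7200), 1))   # 21.5 (MB/sec)

# Quadrupling areal density roughly doubles sectors per track,
# and with it the STR:
print(round(str_mb_per_sec(350 * linear_density_scale(4), 7200), 1))   # 43.0
```

Note that doubling either the sector count or the spindle speed doubles the media rate; this is why both linear density and rpm figure so heavily in the discussion below.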
Today's most quoted spec, indeed the one that differentiates hard disk "classes," is spindle speed. The number of rotations a drive's platters make in a minute, spindle speed in itself doesn't mean much. Directly proportional to spindle speed, however, is a drive's rotational latency. Once the actuators are in place at the appropriate track ("seek time"), the transfer of the data ("STR") must wait until the requested data passes under the disk's heads. As mentioned before, latency, measured in milliseconds, significantly affects a drive's total access time (access time = seek time + rotational latency + overhead). Given a fixed linear data density, a 7200rpm drive will sport STRs one-third higher than a 5400rpm disk. Directly affecting two performance parameters (both access time and STR), spindle speed is indeed a very important spec.
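The access-time arithmetic can be laid out explicitly. Average rotational latency is half a revolution; the overhead figure below is a placeholder assumption, not a measured value:

```python
# Access time = seek time + rotational latency + overhead.
# The 0.5ms overhead is an assumed placeholder for controller/command time.

def rotational_latency_ms(rpm):
    """Average rotational latency: half a revolution, in milliseconds."""
    return 60_000.0 / rpm / 2

def access_time_ms(seek_ms, rpm, overhead_ms=0.5):
    return seek_ms + rotational_latency_ms(rpm) + overhead_ms

print(round(rotational_latency_ms(3600), 2))   # 8.33 -- the old standard speed
print(round(rotational_latency_ms(5400), 2))   # 5.56
print(round(rotational_latency_ms(7200), 2))   # 4.17
```

This also shows why latency now matters: with seeks under 5ms, a 5400rpm drive's 5.56ms latency accounts for roughly half of total access time, whereas against a 65ms seek it was nearly invisible.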
Nonetheless, STR enjoys quite a bit of the spotlight these days. A big reason is the recent transition of drives from the ATA-33 to ATA-66 interface. Though savvy users realize ATA-66 makes no difference with drives transferring data at less than 25 MB/sec, we're rapidly approaching a point where drives will be making swipes at ATA-33's limits. Various overheads prevent access to all 33 MB/sec of ATA-33's bandwidth. Indeed, some readers believe that today's drives push the limit of ATA-33. The Barracuda ATA boasts transfer rates of over 28 MB/sec, precariously close to ATA-33 limits.
For the sake of argument, let's assume that ATA-33 tops out at about 30 megabytes per second. Assume that a new drive hit the market which, in its outer tracks, could transfer 35 megs a second. How much performance would be lost running the drive in ATA-33 mode instead of ATA-66? Would it be significant enough to warrant the purchase of an ATA-66 interface? If not, what about a drive that could top 40 MB/sec? How can we test STR's effect on total disk performance?
As it turns out, there's an easy way. Our testbed's Adaptec AHA-2940U2W allows the user to limit a device's maximum transfer rate. Settings of 5 MB/sec and 10 MB/sec are readily available. Western Digital's 10,000rpm drive, the WDE18310, manages to sustain its maximum transfer rate of 27.9 MB/sec through its outer 2.8 gigs of track. WinBench 99's tests run in less than 3 gigs of space. The results that currently reside in the StorageReview.com database thus taxed only the outermost zone of the WD drive. An interesting test can thus be run by limiting this 28 MB/sec drive through its host adapter to a transfer rate of only 10 MB/sec. Rendering a substantial reduction of 64%, such a test would easily reveal in what areas drives stumble when limited in STR.
We've decided to go ahead and do just that. Results for the WDE18310 unleashed (i.e., with the host adapter set at 80 MB/sec) have been around StorageReview.com for some time. We went on to run our suite of tests on the drive with the adapter limiting operation to 10 MB/sec.
As expected, ZD's WinBench 99 Transfer Rate test results plummeted when the drive was limited to 10 MB/sec operation. Interestingly, Windows 95 transfer tests kept the drive at about 96% of the maximum rate (4% apparent overhead) while NT squeezed out 99.5% efficiency. A curious anomaly arises in the access time department. Under Windows 95, access time rises from 8.9ms to 9.2ms. NT results, however, remained static at 8.8 milliseconds. At any rate, the results here fall squarely in the "no kidding" department. Lower the STR maximum to 10 MB/sec, and, lo and behold, STR measurement through low-level benchmarking drops to about 10 MB/sec! This is simply confirmation that we're running the drive in the mode we desire. Let's take a look at the more telling high-level tests to see how this limitation impacts application performance.
WinBench 99's Disk WinMarks indeed turn in interesting results. The Business Disk WinMark 99, an amalgamation of office-type applications (word processing, spreadsheets, databases, browsers, etc.), suffered a remarkably small drop in both Windows 95 and Windows NT. Despite a drop in sequential transfer rates of over 60%, the Business WinMark dropped by less than 10%. These applications, however, don't lean particularly hard on large-file reads. The High-End Disk WinMark exhibited more significant differences. The overall weighted score dropped 36%-39% when the drive was operated with a 10 MB/sec limit. Since the High-End WinMark gives individual application breakdowns, it's easy to take a look and see what types of applications suffer most. Web page editing with FrontPage 98, for example, appears to rely little on transfers of large files. In either operating system, the score dropped a mere 3% when STR was limited. On the other hand, Premiere, a digital video editing solution, suffers substantial drops in performance (over 40% in NT).
ThreadMark again displays what's likely a severe over-reliance on STR in determining its aggregate score. The 64% cut imposed by reducing maximum STR to 10 MB/sec results in 60% performance drops according to ThreadMark. This implies that other factors such as access time make little difference in performance. Not very likely.
So what can we make of these scores? I should mention that personal use of the drive under STR-limited conditions correlates pretty well with the WinBench results. The results with business/office applications are particularly surprising. Remember, folks: STR was limited to roughly one-third its normal rate in these tests. That translates into a mere 10% loss in office applications. High-end applications were a bit more constrained by the drop in STR, but probably not by the amount many would expect.
These results present some interesting ramifications in the "Do I need ATA-66" debate. As stated earlier, it's likely that no drives exceed ATA-33's maximum transfer rate. Even if there were drives that did, however, these results show that it would take a drive with a 90 MB/sec - 100 MB/sec transfer rate to experience the relative losses found in this experiment. In such a perspective, the performance loss one would experience from running, say, a 40 MB/sec drive on an ATA-33 interface seems trivial. Another example: the performance loss a business applications user would experience running the 28 MB/sec Barracuda ATA on a DMA mode 2 interface (16.6 MB/sec) would be trivial.
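The extrapolation above can be sketched as arithmetic, assuming the experiment's relative cut generalizes to interface-limited drives. The 30 and 33 MB/sec figures are the conservative and nominal ATA-33 ceilings discussed earlier:

```python
# How fast would a drive need to be before ATA-33 imposed a cut
# comparable to the one in this experiment? Assumes the ~64% reduction
# ratio from the WDE18310 test carries over.

native_str = 27.9        # WDE18310 outer-zone sustained rate, MB/sec
limited_str = 10.0       # host-adapter cap used in the experiment
ratio = limited_str / native_str   # drive ran at ~36% of its native STR

for interface_limit in (30.0, 33.0):
    required_native_str = interface_limit / ratio
    print(round(required_native_str, 1))   # 83.7, then 92.1
```

In other words, only a drive with a native STR somewhere in the 85-95 MB/sec range, capped by ATA-33, would suffer losses on the scale seen here; anything slower loses proportionally less.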
In conclusion, while it's true that STR remains vital in many specialized applications, its impact on everyday business use is surprisingly small. These results suggest that all the concerns about interface transfer limits, especially in the ATA landscape, are a bit overblown. The ATA-33 interface found in today's venerable BX chipset motherboards will be able to deliver substantial performance increases with not only the latest of today's hard disks, but also many of the drives of tomorrow. Beyond that, ATA-66 and future iterations will simply carry users that much further.