
Battle of the Titans: Promise SuperTrak 100 vs. 3Ware Escalade 6400
  February 14, 2001 Author: Terry Baranski  
Promise SuperTrak-100 provided by Promise Technology, Inc.
Special thanks go to Hyper Microsystems* for providing the 3Ware Escalade 6400.

* Remember, mention StorageReview.com when ordering from HyperMicro and receive FREE shipping!


Before reading this review we strongly recommend that you read the StorageReview.com RAID Guide.


Introduction

It would be an understatement to say that the popularity of RAID has increased over the last few years. It wasn't long ago that those wishing to implement RAID had to go with SCSI drives and controllers. The relatively high cost of SCSI limited RAID, for the most part, to the server arena.

Then, in 1997, Promise Technologies introduced the first ever ATA RAID card: the FastTrak (now called the FastTrak-33). The floodgates opened with this introduction... even folks who barely knew what RAID was knew that they wanted it. It had never before been possible to configure a RAID array in such a cost-effective fashion: the FastTrak debuted with a pleasant sub-$200 price tag and the ATA drives, of course, were significantly less expensive than their SCSI counterparts.

As the price of ATA drives plummeted, the popularity of ATA RAID grew. Promise followed up on the FastTrak with the FastTrak-66 and FastTrak-100. Other manufacturers - such as Iwill, AMI, and HighPoint - recognized the exploding ATA market and introduced their own cards accordingly. It wasn't long afterwards that motherboards started to integrate ATA RAID chips from these very companies.


"Low-end" RAID?

Not everything was rosy in the land of ATA RAID, however. The FastTrak series of controllers - as well as offerings from Iwill, AMI, and HighPoint - strove for low cost in an effort to appeal to the desktop market. As a result, it's no surprise that performance issues have been at the forefront of discussion. Some claim that these low-end ATA RAID cards do little for real-world performance, while others insist that significant performance gains are indeed realizable. Furthermore, whether or not these cards are even "true" hardware RAID also continues to be a cause of much debate -- some say that a card must have an on-board RISC processor to be considered a hardware implementation.

With all of the above in mind, the introduction of undeniably "true" hardware controllers - complete with on-board processors, several independent ATA channels, a higher price tag, and, in some cases, on-board cache - was inevitable. Indeed, Promise, 3Ware, and Adaptec have since introduced their own line of hardware-based ATA RAID controllers.

Confused? The StorageReview.com RAID Guide explains all!


The SuperTrak 100...

Promise's SuperTrak line of cards has recently drawn significant interest from the online community. Although the original SuperTrak was introduced back in 1998, it languished in obscurity. A year later came the SuperTrak-66, again without much fanfare. Then, last September, Promise announced the third-generation SuperTrak-100.

SuperTrak 100 Specifications:

  • PCI (v.2.1) card with onboard CPU and cache memory;
  • Up to 6 Ultra ATA/100 drives (up to 128GB each; six independent and active data channels);
  • 32-bit onboard Intel i960 RD RISC;
  • Supports up to 128MB cache with one 72-pin EDO SIMM (Units ship with 16MB memory included);
  • RAID Levels 5, 4, 3, 1, 0, 01 and JBOD;
  • "Hot spare" capability;
  • Up to 5000 I/Os per second (cache);
  • Up to 1000 I/Os per second (non-cache);
  • Up to 133MB/sec burst data transfers over PCI bus;
  • Stripe size selectable from 1K to 1MB;
  • Automatic failed drive detect and transparent drive rebuild; audible alarms; supports SMART-capable drives for predictive failure analysis; Windows GUI allows viewing, creating, and deleting arrays; access by Internet; alerts users(s) by e-mail on errors;
  • Elevator seek, tagged command queuing, hardware scatter-gather engine, load balancing;
  • Windows NT4.x / 2000 support;
  • Random block storage class I/O platform with HDM and ISM that conforms to I2O spec 1.5;
  • Complete UDMA CRC error-checking support; NVRAM creates write log for data parity coherency;
  • 2-year warranty
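
Among the features listed above, "elevator seek" means the controller services queued requests in order of on-disk position rather than arrival order, sweeping across the platter like an elevator stopping at floors. A minimal sketch (the request queue and LBA values are our own illustration, not Promise's firmware):

```python
def elevator_order(pending_lbas, head_pos):
    """Order pending requests like an elevator: sweep upward from the
    current head position, then service the remaining lower LBAs."""
    ahead = sorted(lba for lba in pending_lbas if lba >= head_pos)
    behind = sorted(lba for lba in pending_lbas if lba < head_pos)
    return ahead + behind

# Requests arrive out of order; the head currently sits at LBA 500.
print(elevator_order([900, 100, 600, 300], 500))  # [600, 900, 100, 300]
```

Compared with first-come-first-served, this ordering avoids the long back-and-forth seeks that a random arrival order would otherwise cause.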

The SuperTrak-100 immediately caught the enthusiast community's attention - hardly surprising considering the card's specifications. After all, a 6-channel, ATA-100 RAID 0/1/01/3/4/5 card with 16MB of on-board cache and hot-swap support is quite a feat, especially when the ATA RAID community as a whole had previously been used to much less expensive, less feature-rich controllers.

The Card...

The first thing one is likely to notice is the SuperTrak-100's sheer size. The controller's full-length design permits a board that features six IDE connectors, three ATA-100 ASICs, and an Intel i960 processor.

Included with the card are the following items:

  • Six single-connector, 80-conductor, 18" ATA cables;
  • 100+ page user manual (can be downloaded here);
  • Driver and utility disks;
  • Three SuperSwap hot-swap drive enclosures (SuperTrak Pro only)

The Software...

RAID arrays can be created, viewed, and deleted via the SuperTrak-100's BIOS (a.k.a. SuperBuild). Arrays may be created manually (allowing more user control) or via what Promise calls "Auto Setup." The user may also assign one or more drives as hot spares and designate a bootable array.

The package also includes software for remote monitoring and configuration of arrays. The utility, SuperCheck, permits creation and deletion of arrays and configuration of several array parameters. Read and write cache may be independently toggled, and several cache-policy settings may also be set:

  • "Flush frequency timer" - the amount of time a block of data written to the cache may remain there before it is written (i.e., flushed) to the drives;
  • "Dirty threshold flush start" - when the percentage of dirty blocks in the cache exceeds this threshold, flushing begins automatically;
  • "Dirty threshold flush stop" - flushing stops when the percentage of dirty blocks in the cache falls below this threshold.

In this review, all of the above cache-policy parameters were left at their default settings (2 seconds, 50%, and 5%, respectively).
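
The interplay of the two dirty thresholds can be sketched in a few lines. This is a toy model of a write-back cache with threshold-triggered flushing - the class, block counts, and policy defaults are our own illustration (the 2-second flush timer is omitted), not Promise's implementation:

```python
class WriteCache:
    """Toy write-back cache with the SuperTrak-style flush thresholds:
    start flushing above start_pct dirty, stop below stop_pct dirty."""
    def __init__(self, blocks=1024, start_pct=50, stop_pct=5):
        self.blocks = blocks        # total cache blocks
        self.dirty = set()          # blocks written but not yet on disk
        self.start_pct = start_pct  # begin flushing above this % dirty
        self.stop_pct = stop_pct    # stop flushing below this % dirty

    def dirty_pct(self):
        return 100.0 * len(self.dirty) / self.blocks

    def write(self, block):
        self.dirty.add(block)
        if self.dirty_pct() > self.start_pct:
            self.flush()            # threshold-triggered flush

    def flush(self):
        # Drain dirty blocks until we fall below the stop threshold.
        while self.dirty and self.dirty_pct() > self.stop_pct:
            self.dirty.pop()        # stand-in for a write to the drives

cache = WriteCache()
for b in range(600):                # 600 writes into a 1024-block cache
    cache.write(b)
assert cache.dirty_pct() <= 50      # a flush fired once 50% was crossed
```

The gap between the start and stop thresholds (hysteresis) means each flush drains a large batch of blocks rather than flushing one block at a time at the 50% boundary.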

SuperCheck also displays array statistics such as individual drive information, SuperSwap fan/temperature information, and a plethora of cache statistics such as read hits, write hits, dirty usage, and the number of I/O requests made to the drives themselves (i.e., cache misses).

SuperCheck allows remote monitoring of all SuperTrak arrays as long as the machine housing the array is accessible from the remote location (either directly, or via another machine running the Message Server utility). The utility also includes an email-alert notification feature that sends a message to your email address if a drive and/or array connected to the SuperTrak fails. This feature is very common on SCSI RAID cards; few deny its usefulness.

Finally, the SuperCheck facilitates array synchronization. This feature compares two mirrored drives sector by sector to ensure that they are identical. If they aren't, data from the primary drive is automatically copied to the secondary drive. The user may also schedule synchronizations for a later time.



The Escalade 6400...

Although Promise commands the majority of the ATA RAID market, several other companies have recognized the industry's potential. 3ware is one such company: in April of 1999, it introduced its first series of ATA RAID controllers, the DiskSwitch 4 series. Soon afterwards, 3ware introduced the Escalade 5000 series in 2, 4, and 8-channel configurations (model numbers 5200, 5400, and 5800, respectively). Contemporary ATA drives, however, were rapidly approaching the limits of ATA-33, rendering the 5000 series outdated.

To remedy the situation, 3ware released the Escalade 6000 series last summer. Featuring the same 2, 4, and 8-channel options, the 6000 series differs mainly through its ATA-66 support. Like the SuperTrak, the Escalade series has a dedicated ATA channel for each drive; therefore, its ATA-66 interface should not be a limiting factor in the near future.

Escalade 6400 Specifications:

  • 4 Ultra ATA/66 or Ultra ATA/33 drives;
  • RAID levels 5, 0, 1, and 10;
  • On-board processor reduces CPU overhead;
  • TwinStor technology improves RAID 1 and RAID 10 beyond simple mirroring for redundancy. 3ware's advanced adaptive algorithms and drive profiling speed data access, yielding read performance that rivals RAID 0 striping for both large data files and smaller randomly distributed transactions;
  • DiskSwitch architecture replaces the shared bus found in SCSI systems with a multiplexed data path that speeds data into system memory without burdening the host CPU;
  • Greater than 100MB/sec sustained reads;
  • Greater than 84MB/sec sustained writes;
  • Stripe size selectable from 64K to 1MB;
  • Elevator seeking, command queuing;
  • Hot swap and hot spare capability;
  • Windows 98, Windows NT 4.0, Windows 2000, and Linux (Red Hat 6.1, 6.2, SuSE 6.3, 6.4, TurboLinux 6.02 support. Driver available in Open Source Kernel 2.2.15 and beyond);
  • 3-year warranty.

The Card...

Like the SuperTrak 100, the Escalade 6400 is a full-length PCI card. It features four ATA connectors, two ATA-66 ASICs, and a dedicated RISC processor. The controller features four green LEDs - one next to each ATA connector. They individually light up whenever data is being transferred through each of their respective channels. These LEDs are a welcome feature to folks like us who always want to know as much as possible about what's going on with our hardware at any given time.

The Escalade includes the following items:

  • Four single-connector, 80-conductor 18" ATA cables;
  • 100+ page user manual;
  • Driver disks and utility CD;
  • Two Y-splitter cables to connect 2 drives to a single power supply connector.

RAID 5 support for the Escalade series?

On February 1st of this year, 3ware introduced a firmware upgrade that brings RAID 5 functionality to the Escalade 6400 and 6800 (not the 6200, of course, since RAID 5 requires at least three drives). Until this point, the Escalade had only supported RAID 0, 1, and 10.

How can a firmware upgrade enable a card to support RAID 5? There are two possibilities: 1) the card was always meant to support RAID 5, but the feature was delayed by half a year; or 2) the card was not meant to support RAID 5, but a decision was made sometime after its release (perhaps due to the SuperTrak's RAID 5 support?) to implement RAID 5 at the firmware level. More on this later...
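
A firmware-level RAID 5 implementation is plausible because RAID 5 parity is nothing more exotic than a byte-wise XOR across the data blocks of a stripe - something the card's existing on-board processor can compute. A minimal sketch of the math (our own illustration, not 3ware's firmware):

```python
def parity(blocks):
    """RAID 5 parity: byte-wise XOR of all blocks in a stripe."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# Three data drives plus one parity drive (a 4-drive RAID 5 stripe).
d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xff\x00"
p = parity([d0, d1, d2])

# Lose any one drive: XOR of the survivors rebuilds the missing block.
assert parity([d1, d2, p]) == d0
```

The same XOR property drives both normal parity updates and the degraded-mode rebuilds that the cards perform after a drive failure.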

DiskSwitch and TwinStor: Quality engineering, or fancy marketing?

When one visits 3ware's website, it's virtually impossible to miss the words "DiskSwitch" and "TwinStor." These terms have been trademarked by 3ware, and refer to architectural features of their Escalade series of cards. Both terms are covered in great detail on 3ware's site, so we'll just present an overview here. Those seeking more detail may find the respective whitepapers here and here.

DiskSwitch

The term DiskSwitch actually comprises three aspects of the Escalade controllers: AccelerATA data channels, a packet switching controller, and the card's on-board RISC processor.

AccelerATA is 3ware's term for the dedicated ATA channel that each drive enjoys on an Escalade. These channels, along with the packet switching controller and on-board RISC processor, provide each ATA drive with full bandwidth to the host. 3ware claims to be the first company to use packet switching on a storage controller to both lower latency and provide exceptional scaling as more drives are added.

TwinStor

3ware's TwinStor architecture - used in RAID 1 and RAID 10 arrays - profiles each drive to allow for "maximum performance for the particular brand of drive used." TwinStor then maintains a statistical history of data accesses to allow sequential and random reads to be distinguished from one another. For random reads, TwinStor reorders I/Os and load balances them between all drives in the array; for sequential I/Os, TwinStor utilizes all drives in the array for maximum transfer rates. Since each drive in a RAID 1 or RAID 10 array has a mirror, proper load balancing of random I/O can provide performance which rivals RAID 0 arrays.

The Software...

3ware's Escalade cards include a BIOS that allows arrays to be created or deleted. The BIOS also permits rebuilds and features a toggle for an array's write cache. (Note: Escalade cards do not feature onboard cache; the write cache referred to in the BIOS is actually the buffer of the drives themselves.) The BIOS also sports a verify feature similar to Promise's synchronization procedure.

Just like Promise's SuperCheck, 3ware's software utility, 3DM, allows arrays to be viewed, created, deleted, and maintained. Drives may be added, removed, or designated as hot spares. Also like the SuperCheck utility, 3DM can run in the background as a daemon and provide event notification functionality. The utility can either send an email or trigger a local event in Windows when an array becomes degraded or non-functional. In addition, the card's audible alarm may be toggled and the rebuild rate may be changed (faster rebuilding means lower system performance during rebuilds, and vice-versa).

The utility may be accessed via a web browser from any remote location as long as the two machines can connect to each other over a network. All of the utility's aforementioned features are accessible remotely - quite convenient.

A Note on Hot Swapping...

Though both cards claim hot swap capability, there's a difference between the two. As mentioned above, the SuperTrak Pro comes with three SuperSwap hot swap ATA drive enclosures. These enclosures allow drives to be inserted or removed without having to power down the machine. The Escalade, however, comes with no such enclosure, so a user wishing to take advantage of its hot swap support must purchase third-party hot swap drive enclosures.



Methodology...

StorageReview's RAID reviews will feature WinBench and IOMeter performance measures. The WinBench testing methodology will be identical to that used in the Same Drives - Same Performance? article. To quote from the article:

"The following WinBench tests were run five times on each drive: Disk/Read Transfer Rate, Disk CPU utilization, Disk Access Time, Business Disk WinMark 99, and High-End Disk WinMark 99. A single run of each of the above tests was considered a "trial", with five trials being conducted for each drive. The machine was rebooted between trials. Each test's final score represents the average of the five runs."

IOMeter, on the other hand, isn't quite as simple. When considered as a whole, StorageReview's three IOMeter access patterns (Workstation, File Server, and Database) are heavily biased towards reads, about 82% to 18%. Though we believe each respective test's read/write distribution represents the tasks from which these patterns draw their names, we realize that write performance becomes a particularly important issue with RAID arrays - especially with RAID levels that use parity.

With this in mind, two additional IOMeter access patterns were created to stress write performance. One stresses random writes; the other sequential writes. These new access patterns, as well as the original three, are outlined below.

Access Patterns
% of Access Specification Transfer Request Size % Reads % Random
File Server Access Pattern (as defined by Intel)
10% 0.5 KB 80% 100%
5% 1 KB 80% 100%
5% 2 KB 80% 100%
60% 4 KB 80% 100%
2% 8 KB 80% 100%
4% 16 KB 80% 100%
4% 32 KB 80% 100%
10% 64 KB 80% 100%
Workstation Access Pattern (as defined by StorageReview.com)
100% 8 KB 80% 80%
Database Access Pattern (as defined by Intel/StorageReview.com)
100% 8 KB 67% 100%
Random Write Pattern (as defined by StorageReview.com)
100% 8 KB 0% 100%
Sequential Write Pattern (as defined by StorageReview.com)
100% 256 KB 0% 0%
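
The File Server pattern above is a weighted mix of request sizes; encoding it as data makes the weights easy to sanity-check (the encoding is our own, taken from the table above):

```python
# (transfer size in KB, weight %) pairs from the File Server pattern.
file_server = [(0.5, 10), (1, 5), (2, 5), (4, 60),
               (8, 2), (16, 4), (32, 4), (64, 10)]

# The weights must account for all I/O issued under the pattern.
assert sum(w for _, w in file_server) == 100

# Weighted average request size for the pattern.
avg_kb = sum(size * w for size, w in file_server) / 100
print(f"average request: {avg_kb:.2f} KB")  # 11.08 KB
```

The 60% weight on 4KB requests dominates the I/O count, but the 10% of 64KB requests contributes more than half of the bytes moved.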

Information on the testbed may also be found in the aforementioned Same Drives - Same Performance? article. As was the case there, four Maxtor DiamondMax 80 drives will be used for all ATA RAID testing.

The Benchmarks...

Because the Escalade 6400 "only" supports RAID levels 0, 1, 10, and 5, we can compare it to the SuperTrak only at these four RAID levels. SuperTrak RAID 3 and RAID 4 benchmarks will follow these comparisons.

The SuperTrak's read and write cache were both enabled. The Escalade's "write cache" setting was also enabled. Additionally, all benchmarks in this article were drawn with a stripe size of 64k (save for the SuperTrak RAID 3 tests - see below). We plan on exploring the effect of both stripe size and caching on RAID performance in a future article.
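
The stripe size chosen here determines how logical addresses spread across the member drives. A minimal RAID 0 address-translation sketch, using the 64K stripe and four drives from our test configuration (the function and its layout are our own illustration):

```python
STRIPE_KB = 64   # stripe size used for these benchmarks
DRIVES = 4       # four-drive RAID 0 array

def raid0_map(offset_kb):
    """Map a logical offset (KB) to (drive index, offset on that drive)."""
    stripe = offset_kb // STRIPE_KB            # which stripe the I/O hits
    drive = stripe % DRIVES                    # stripes rotate across drives
    drive_off = (stripe // DRIVES) * STRIPE_KB + offset_kb % STRIPE_KB
    return drive, drive_off

# A 256KB sequential read touches all four drives once - the source of
# RAID 0's sustained transfer rate scaling.
assert {raid0_map(o)[0] for o in range(0, 256, 64)} == {0, 1, 2, 3}
```

A larger stripe keeps small random I/Os on a single drive, while a smaller stripe splits even modest requests across spindles - which is why we plan to explore stripe size separately.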

All SuperTrak tests were conducted with BIOS revision 1.00 (Build 11) and driver version 1.0.0.0. The Escalade RAID 0/1/10 tests were performed with BIOS revision 1.04.00.009, firmware revision 1.00.43.003, and driver version 1.09.00.005. The firmware was then updated to revision 1.01.18.001 (BIOS revision 1.06.00.009), and driver revision 1.09.000.015 was installed to enable RAID 5 functionality. Since the new firmware and drivers hit the web only days before this review's publish date (3ware's press release still claims an availability date of February 15), and since a trial of IOMeter tests with the new firmware and drivers at RAID levels 0/1/10 revealed no significant performance differences, we chose not to delay this review further by re-running the RAID 0/1/10 tests under the new firmware.



WinBench Results...

Let's start out with some base scores utilizing a single drive... this gives us something to compare RAID results against. The table below presents WinBench scores for a single DiamondMax 80 on the following controllers: the Abit SL6's on-board ATA controller, a Promise Ultra66, the SuperTrak-100, and the Escalade 6400:

Ziff Davis WinBench 99 under Windows 2000 Professional using NTFS
Benchmark Single-Drive Base Scores
Abit SL6 on-board ATA / Promise Ultra66 / SuperTrak-100 / Escalade 6400
Business Disk WinMark 99 (KB/sec) 6484 5820 4872 4454
High-End Disk WinMark 99 (KB/sec) 16220 14460 13580 15200
AVS/Express 3.4 (KB/sec) 13600 13160 11560 15300
FrontPage 98 (KB/sec) 65860 50980 58400 62480
MicroStation SE (KB/sec) 21480 18320 17480 19580
Photoshop 4.0 (KB/sec) 8534 8668 7154 8502
Premiere 4.2 (KB/sec) 14760 12440 10676 13460
Sound Forge 4.0 (KB/sec) 18540 16040 19480 14820
Visual C++ (KB/sec) 17000 14200 14460 14520
Disk/Read Transfer Rate
Beginning (KB/sec) 29800 29733 20200 29800
End (KB/sec) 17500 17500 17500 17500
Disk Access Time (ms) 15.22 15.26 16.12 15.04
Disk CPU Utilization (%) 2.91 3.04 2.89 3.11

With a single drive, both the SuperTrak and Escalade turn in a Business Disk Winmark score lower than that of the Ultra66. The SuperTrak's High-End score is also significantly lower. These scores seem reasonable for the SuperTrak, since its sustained transfer rate is limited to about 20MB/sec when read cache is enabled. For the Escalade, however, there's no logical reason why its base Business score would be lower than that of the Ultra66 - its sustained transfer rate is the same, and WinBench claims its base seek time is lower.

Why is the SuperTrak limited to a sustained transfer rate of just 20MB/sec? Promise tells us it's due to both the cache and the ASICs themselves. Caches in general are bad for sustained transfer rate, Promise says, and the SuperTrak's ASICs were not optimized for STR. This limit will undoubtedly be a disappointment to anyone requiring high STR for optimal performance.

Ziff Davis WinBench 99 under Windows 2000 Professional using NTFS - RAID 0
Benchmark 2 Drives, RAID 0 3 Drives, RAID 0 4 Drives, RAID 0
SuperTrak-100 Escalade 6400 SuperTrak-100 Escalade 6400 SuperTrak-100 Escalade 6400
Business Disk WinMark 99 (KB/sec) 4948 3944 4956 3992 5020 4372
High-End Disk WinMark 99 (KB/sec) 14620 13540 15100 14820 14840 15080
AVS/Express 3.4 (KB/sec) 11480 15660 12920 17160 11420 16360
FrontPage 98 (KB/sec) 59180 31360 59720 30560 59260 33380
MicroStation SE (KB/sec) 16100 16500 14920 15820 15340 16040
Photoshop 4.0 (KB/sec) 8572 10532 8974 12460 9172 13740
Premiere 4.2 (KB/sec) 11880 12020 12000 14680 11920 13640
Sound Forge 4.0 (KB/sec) 21460 14300 21980 14920 21640 14780
Visual C++ (KB/sec) 15980 8910 16260 9744 16340 10328
Disk/Read Transfer Rate
Beginning (KB/sec) 20133 59600 20100 85833 20200 102667
End (KB/sec) 20200 35000 20133 52300 17500 69600
Disk Access Time (ms) 16.22 15.24 16.06 15.18 15.94 15.40
Disk CPU Utilization (%) 2.86 3.07 2.85 3.20 88.50 88.80

RAID 0 scores from both controllers are rather unimpressive. Generally speaking, each card's Disk Winmark scores are no better than those of a single drive.

In the STR arena, the SuperTrak's 20MB/sec causes it to pale in comparison with what the Escalade achieves as more drives are added. As shown above, the Escalade manages an amazing 103MB/sec with a four drive RAID 0 array. Had we not seen it with our own eyes, we probably wouldn't believe such a score: We simply didn't think that the PCI bus's overhead would allow for anything over 90MB/sec or so.

It's also interesting to note that average access times for both cards seem to increase somewhat in RAID 0. This phenomenon is fairly consistent in RAID 0 configs, though we're not sure why. Indeed, RAID 0 should yield some positioning benefit in addition to increases in STR.

Finally, note the ridiculously high CPU utilization scores turned in by both controllers in a four-drive RAID 0 array. We believe this to be a quirk in WinBench because, as we'll see, IOMeter shows no such CPU utilization increase.

Ziff Davis WinBench 99 under Windows 2000 Professional using NTFS - RAID 1
Benchmark 2 Drives, RAID 1
SuperTrak-100 Escalade 6400
Business Disk WinMark 99 (KB/sec) 4912 3608
High-End Disk WinMark 99 (KB/sec) 12960 11660
AVS/Express 3.4 (KB/sec) 11540 16180
FrontPage 98 (KB/sec) 58980 27300
MicroStation SE (KB/sec) 17800 17560
Photoshop 4.0 (KB/sec) 6686 8100
Premiere 4.2 (KB/sec) 9880 9530
Sound Forge 4.0 (KB/sec) 16820 11760
Visual C++ (KB/sec) 14300 7646
Disk/Read Transfer Rate
Beginning (KB/sec) 20100 47600
End (KB/sec) 17500 27467
Disk Access Time (ms) 14.30 13.10
Disk CPU Utilization (%) 2.87 3.10

As was the case with RAID 0, the above RAID 1 Disk Winmark scores don't seem to reflect reality. Despite the fact that each card's access time in RAID 1 is significantly lower than that recorded using a single drive, and despite the fact that STR is either much better (Escalade) or equal (SuperTrak) to that of a single drive, RAID 1 Disk Winmark scores are significantly worse. We have a very, very difficult time believing that these scores represent actual performance.

As mentioned above, average access time decreases significantly under RAID 1 for both cards. This indicates that both cards perform some type of intelligent load balancing between the two drives. For example, when an I/O request is made to a RAID 1 array, there are always two drives available to service the request since each drive has the exact same data. Therefore, an intelligent RAID card can judge which drive's actuator is closer to the needed data and direct the request accordingly. This tends to result in lower average access times and better performance.
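
The dispatch decision described above reduces to picking the mirror whose actuator is closest to the requested data. A sketch of that policy (tracking head position by LBA is our own simplification of what a real controller does):

```python
def pick_mirror(target_lba, head_positions):
    """RAID 1 read dispatch: send the request to whichever mirror's
    actuator is closest to the target, shortening the average seek."""
    return min(range(len(head_positions)),
               key=lambda d: abs(head_positions[d] - target_lba))

heads = [1000, 90000]                 # current head position of each mirror
assert pick_mirror(2000, heads) == 0  # near drive 0's head
assert pick_mirror(85000, heads) == 1 # near drive 1's head
```

With random reads, the two heads tend to settle into separate regions of the disk, so each seek covers roughly half the distance a lone drive would travel - consistent with the lower measured access times.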

The Escalade's STR of 50MB/sec is about 2/3 higher than that of a single drive due to 3ware's TwinStor architecture. Needless to say, it's a significant improvement.

Note: Although there are fault tolerance-related differences between RAID 01 and RAID 10, there isn't a theoretical reason for performance differences between these two array levels. Therefore, we feel it's fair to compare the SuperTrak's RAID 01 performance to the Escalade's RAID 10 performance.
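
While performance should be identical, the fault-tolerance difference mentioned in the note can be made concrete by enumerating two-drive failures on a four-drive array (the drive numbering and pairings are our own illustration):

```python
from itertools import combinations

def raid10_ok(failed):
    # RAID 10: two mirrored pairs, striped; each pair
    # must keep at least one working drive.
    pairs = [{0, 1}, {2, 3}]
    return all(not p <= failed for p in pairs)

def raid01_ok(failed):
    # RAID 01: two striped sets, mirrored; at least one
    # stripe set must remain fully intact.
    sets = [{0, 1}, {2, 3}]
    return any(not (s & failed) for s in sets)

two_drive = [set(c) for c in combinations(range(4), 2)]
print(sum(raid10_ok(f) for f in two_drive))  # RAID 10 survives 4 of 6
print(sum(raid01_ok(f) for f in two_drive))  # RAID 01 survives 2 of 6
```

Both levels survive any single failure; the gap only appears once a second drive dies, which is why the distinction is about fault tolerance rather than performance.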

Ziff Davis WinBench 99 under Windows 2000 Professional using NTFS - RAID 01/10
Benchmark 4 Drives, SuperTrak-100 in RAID 01, Escalade 6400 in RAID 10
SuperTrak-100 Escalade 6400
Business Disk WinMark 99 (KB/sec) 4924 3892
High-End Disk WinMark 99 (KB/sec) 13500 13000
AVS/Express 3.4 (KB/sec) 11660 14980
FrontPage 98 (KB/sec) 58300 32200
MicroStation SE (KB/sec) 16380 16600
Photoshop 4.0 (KB/sec) 7388 10400
Premiere 4.2 (KB/sec) 10480 10584
Sound Forge 4.0 (KB/sec) 18160 13280
Visual C++ (KB/sec) 14580 8862
Disk/Read Transfer Rate
Beginning (KB/sec) 20067 59933
End (KB/sec) 20100 46633
Disk Access Time (ms) 14.36 13.46
Disk CPU Utilization (%) 2.85 3.04

Again we're faced with seemingly illogical Disk Winmark scores. There's no reason why these scores should be so poor here, especially given the fact that both cards enjoy significantly decreased access times in RAID 01/10 (relative to a single drive). In addition, the Escalade's STR doubles in RAID 10; even so, Disk Winmark scores don't appear to reflect this.

Ziff Davis WinBench 99 under Windows 2000 Professional using NTFS - RAID 5
Benchmark 3 Drives, RAID 5 4 Drives, RAID 5
SuperTrak-100 Escalade 6400 SuperTrak-100 Escalade 6400
Business Disk WinMark 99 (KB/sec) 2542 2120 3998 2378
High-End Disk WinMark 99 (KB/sec) 1856 4792 9266 6488
AVS/Express 3.4 (KB/sec) 10018 13580 11420 14320
FrontPage 98 (KB/sec) 27880 12200 56760 12480
MicroStation SE (KB/sec) 10642 10644 13680 11880
Photoshop 4.0 (KB/sec) 675 1884 4148 1988
Premiere 4.2 (KB/sec) 1020 2304 6688 2522
Sound Forge 4.0 (KB/sec) 1560 3728 12520 4070
Visual C++ (KB/sec) 2290 3328 10060 3590
Disk/Read Transfer Rate
Beginning (KB/sec) 11767 58833 12533 81900
End (KB/sec) 9647 34933 10933 52233
Disk Access Time (ms) 16.48 15.20 16.46 15.30
Disk CPU Utilization (%) 2.85 3.11 2.85 3.10

Much to our dismay, horrendous WinBench results continued under RAID 5. Note the Disk WinMark scores for both cards: they are not typos. With the SuperTrak, the benchmark would consistently "freeze" 39% through the High-End test for half an hour or more, only to then start up again, and eventually finish. It's certainly not typical behavior... and it's certainly something that can botch up a timed test. This behavior was repeatable on both the testbed and on another machine; we simply don't know if it's an issue with WinBench itself, or the SuperTrak. Regardless, the results are obviously not representative of performance.

With the Escalade, there was no "freeze" during the High-End test. However, the results are nonetheless obviously unrepresentative. Once again, the inaccurate results were repeatable on a separate system.

Aside from the problems with Disk Winmark scores, it's worth noting that the SuperTrak's RAID 5 access time is higher than that of a single drive configuration as well as any other supported RAID level. This is disappointing, as the SuperTrak is marketed as a RAID 5 controller.

The SuperTrak's sustained transfer rate graph is somewhat odd, but we're not sure why. The overall appearance of the graph is very consistent- the dips and peaks in STR always occur in about the same locations. The Escalade's graph is much more "normal", however.



IOMeter Scores...

Considering the irregular results that WinBench delivered, IOMeter is a breath of fresh air... it helps us ascertain the true performance of both cards. We're pleased to report that we have no doubt the results below are representative of performance - Promise confirmed that all of the SuperTrak's IOMeter scores are on the mark. We're confident in IOMeter's ability to deliver accurate Escalade results as well.

Base scores...

IOMeter - File Server Access Pattern
IOMeter Tests IO/sec MB/sec Response Time CPU Util. IO/CPU%
Load = Linear
Abit SL6 on-board ATA 64.11 0.70 15.59 ms 0.76 % 84.36
Promise Ultra66 65.51 0.69 15.26 ms 0.70 % 93.59
SuperTrak-100 Base 61.23 0.66 16.33 ms 0.58 % 105.57
Escalade 6400 Base 65.77 0.71 15.20 ms 0.66 % 99.65
Load = Very Light
Abit SL6 on-board ATA 66.91 0.73 59.77 ms 0.75 % 89.21
Promise Ultra66 67.85 0.73 58.94 ms 0.73 % 92.95
SuperTrak-100 Base 63.23 0.68 63.25 ms 0.71 % 89.06
Escalade 6400 Base 66.60 0.72 60.06 ms 0.70 % 95.14
Load = Light
Abit SL6 on-board ATA 76.83 0.85 208.21 ms 0.87 % 88.31
Promise Ultra66 78.11 0.85 204.80 ms 0.89 % 87.76
SuperTrak-100 Base 72.58 0.78 220.37 ms 0.81 % 89.60
Escalade 6400 Base 75.80 0.82 211.07 ms 0.82 % 92.44
Load = Moderate
Abit SL6 on-board ATA 86.75 0.95 737.33 ms 1.04 % 83.41
Promise Ultra66 87.72 0.94 729.34 ms 1.02 % 86.00
SuperTrak-100 Base 80.79 0.87 791.42 ms 0.96 % 84.16
Escalade 6400 Base 84.57 0.91 756.63 ms 0.91 % 92.93
Load = Heavy
Abit SL6 on-board ATA 94.68 1.02 2703.59 ms 1.48 % 63.97
Promise Ultra66 96.97 1.05 2634.26 ms 1.53 % 63.38
SuperTrak-100 Base 88.24 0.95 2894.77 ms 1.14 % 77.40
Escalade 6400 Base 96.91 1.05 2639.37 ms 1.43 % 67.77

IOMeter - Workstation Access Pattern
IOMeter Tests IO/sec MB/sec Response Time CPU Util. IO/CPU%
Load = Linear
Abit SL6 on-board ATA 75.20 0.59 13.29 ms 0.84 % 89.52
Promise Ultra66 76.05 0.59 13.14 ms 0.81 % 93.89
SuperTrak-100 Base 71.55 0.56 13.97 ms 0.72 % 99.38
Escalade 6400 Base 77.43 0.60 12.91 ms 0.79 % 98.01
Load = Very Light
Abit SL6 on-board ATA 77.47 0.61 51.63 ms 0.76 % 101.93
Promise Ultra66 77.96 0.61 51.30 ms 0.81 % 96.25
SuperTrak-100 Base 73.54 0.57 54.39 ms 0.87 % 84.53
Escalade 6400 Base 77.98 0.61 51.29 ms 0.85 % 91.74
Load = Light
Abit SL6 on-board ATA 88.26 0.69 181.26 ms 1.00 % 88.26
Promise Ultra66 89.49 0.70 178.74 ms 1.03 % 86.88
SuperTrak-100 Base 83.35 0.65 191.92 ms 0.95 % 87.74
Escalade 6400 Base 86.38 0.67 185.20 ms 0.94 % 91.89
Load = Moderate
Abit SL6 on-board ATA 98.92 0.77 646.73 ms 1.05 % 94.21
Promise Ultra66 99.85 0.78 640.55 ms 1.14 % 87.59
SuperTrak-100 Base 92.14 0.72 694.32 ms 1.05 % 87.75
Escalade 6400 Base 95.44 0.75 670.39 ms 1.01 % 94.50
Load = Heavy
Abit SL6 on-board ATA 110.14 0.86 2320.72 ms 1.52 % 72.46
Promise Ultra66 110.34 0.86 2316.28 ms 1.66 % 66.47
SuperTrak-100 Base 100.51 0.79 2542.63 ms 1.34 % 75.01
Escalade 6400 Base 110.31 0.86 2317.88 ms 1.48 % 74.53

IOMeter - Database Access Pattern
IOMeter Tests IO/sec MB/sec Response Time CPU Util. IO/CPU%
Load = Linear
Abit SL6 on-board ATA 68.64 0.54 14.56 ms 0.69 % 99.48
Promise Ultra66 68.73 0.54 14.55 ms 0.78 % 88.12
SuperTrak-100 Base 65.58 0.51 15.24 ms 0.72 % 91.08
Escalade 6400 Base 70.04 0.55 14.27 ms 0.75 % 93.39
Load = Very Light
Abit SL6 on-board ATA 70.33 0.55 56.87 ms 0.74 % 95.04
Promise Ultra66 70.79 0.55 56.50 ms 0.79 % 89.61
SuperTrak-100 Base 67.06 0.52 59.64 ms 0.77 % 87.09
Escalade 6400 Base 71.18 0.56 56.19 ms 0.78 % 91.26
Load = Light
Abit SL6 on-board ATA 79.27 0.62 201.79 ms 0.87 % 91.11
Promise Ultra66 80.22 0.63 199.41 ms 0.85 % 94.38
SuperTrak-100 Base 75.06 0.59 213.12 ms 0.87 % 86.28
Escalade 6400 Base 78.72 0.62 203.20 ms 0.95 % 82.86
Load = Moderate
Abit SL6 on-board ATA 87.89 0.69 727.76 ms 0.90 % 97.66
Promise Ultra66 88.75 0.69 720.78 ms 1.04 % 85.34
SuperTrak-100 Base 82.32 0.64 777.13 ms 0.87 % 94.62
Escalade 6400 Base 87.25 0.68 733.46 ms 0.99 % 88.13
Load = Heavy
Abit SL6 on-board ATA 96.50 0.75 2647.68 ms 1.49 % 64.77
Promise Ultra66 97.24 0.76 2627.74 ms 1.42 % 68.48
SuperTrak-100 Base 89.31 0.70 2860.85 ms 1.28 % 69.77
Escalade 6400 Base 97.45 0.76 2623.54 ms 1.33 % 73.27

IOMeter - Random Write Pattern
IOMeter Tests IO/sec MB/sec Response Time CPU Util. IO/CPU%
Load = Linear
Abit SL6 on-board ATA 104.48 0.82 9.57 ms 1.08 % 96.74
Promise Ultra66 105.25 0.82 9.50 ms 1.17 % 89.96
SuperTrak-100 Base 107.65 0.84 9.28 ms 1.02 % 105.54
Escalade 6400 Base 109.01 0.85 9.17 ms 1.02 % 106.87
Load = Very Light
Abit SL6 on-board ATA 104.36 0.82 38.32 ms 1.00 % 104.36
Promise Ultra66 105.17 0.82 38.03 ms 1.13 % 93.07
SuperTrak-100 Base 107.63 0.84 37.16 ms 1.14 % 94.41
Escalade 6400 Base 108.73 0.85 36.78 ms 1.17 % 92.93
Load = Light
Abit SL6 on-board ATA 104.33 0.82 153.34 ms 1.06 % 98.42
Promise Ultra66 105.12 0.82 152.17 ms 1.09 % 96.44
SuperTrak-100 Base 107.83 0.84 148.35 ms 1.10 % 98.03
Escalade 6400 Base 108.94 0.85 146.85 ms 1.18 % 92.32
Load = Moderate
Abit SL6 on-board ATA 105.80 0.83 604.69 ms 1.08 % 97.96
Promise Ultra66 106.68 0.83 599.56 ms 1.23 % 86.73
SuperTrak-100 Base 109.37 0.85 584.67 ms 1.13 % 96.79
Escalade 6400 Base 109.37 0.85 585.01 ms 1.26 % 86.80
Load = Heavy
Abit SL6 on-board ATA 114.16 0.89 2239.30 ms 1.71 % 66.76
Promise Ultra66 115.36 0.90 2215.46 ms 1.68 % 68.67
SuperTrak-100 Base 117.65 0.92 2173.27 ms 1.51 % 77.91
Escalade 6400 Base 116.02 0.91 2202.95 ms 1.66 % 69.89

Jump To AnalysisIOMeter - Sequential Write PatternJump To Analysis
IOMeter TestsIO/secMB/secResponse TimeCPU Util.IO/CPU%
Load = Linear
Abit SL6 on-board ATA110.14 27.54 9.07 ms 6.75 % 16.32
Promise Ultra66110.39 27.60 9.05 ms 3.56 % 31.01
SuperTrak-100 Base51.24 12.81 19.51 ms 2.25 % 22.77
Escalade 6400 Base110.56 27.64 9.05 ms 3.69 % 29.96
Load = Very Light
Abit SL6 on-board ATA109.91 27.48 36.39 ms 6.82 % 16.12
Promise Ultra66110.51 27.63 36.19 ms 3.74 % 29.55
SuperTrak-100 Base51.39 12.85 77.83 ms 2.40 % 21.41
Escalade 6400 Base110.65 27.66 36.14 ms 4.25 % 26.04
Load = Light
Abit SL6 on-board ATA109.90 27.48 145.56 ms 7.36 % 14.93
Promise Ultra66110.55 27.64 144.71 ms 3.92 % 28.20
SuperTrak-100 Base51.41 12.85 311.24 ms 2.43 % 21.16
Escalade 6400 Base110.72 27.58 144.49 ms 4.21 % 26.30
Load = Moderate
Abit SL6 on-board ATA109.97 27.49 427.35 ms 12.40 % 8.87
Promise Ultra66110.50 27.62 579.16 ms 6.13 % 18.03
SuperTrak-100 Base51.41 12.85 1244.89 ms 2.88 % 17.85
Escalade 6400 Base110.76 27.69 577.80 ms 5.48 % 20.21
Load = Heavy
Abit SL6 on-board ATA108.61 27.15 2126.35 ms 11.58 % 9.38
Promise Ultra66109.43 27.36 2115.21 ms 5.65 % 19.37
SuperTrak-100 Base51.07 12.77 4514.37 ms 2.83 % 18.05
Escalade 6400 Base109.48 27.37 2247.14 ms 5.67 % 19.31

The Escalade's base, single-drive scores are more or less identical to the Ultra66's. The SuperTrak, however, lags somewhat behind both.

Confused? The StorageReview.com RAID Guide explains RAID 0!


RAID 0...


IOMeter - File Server Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 2 Drives | 66.77 | 0.72 | 14.97 ms | 0.60 % | 111.28
Escalade 6400 RAID 0 w/ 2 Drives | 71.71 | 0.77 | 13.94 ms | 0.68 % | 105.46
Load = Very Light
SuperTrak-100 RAID 0 w/ 2 Drives | 67.46 | 0.73 | 59.29 ms | 0.74 % | 91.16
Escalade 6400 RAID 0 w/ 2 Drives | 109.55 | 1.20 | 36.51 ms | 1.18 % | 92.84
Load = Light
SuperTrak-100 RAID 0 w/ 2 Drives | 76.03 | 0.82 | 210.39 ms | 0.82 % | 92.72
Escalade 6400 RAID 0 w/ 2 Drives | 122.64 | 1.32 | 130.45 ms | 1.23 % | 99.71
Load = Moderate
SuperTrak-100 RAID 0 w/ 2 Drives | 83.00 | 0.90 | 770.43 ms | 0.88 % | 94.32
Escalade 6400 RAID 0 w/ 2 Drives | 135.94 | 1.48 | 470.67 ms | 1.39 % | 97.80
Load = Heavy
SuperTrak-100 RAID 0 w/ 2 Drives | 92.09 | 0.99 | 2775.03 ms | 1.16 % | 79.39
Escalade 6400 RAID 0 w/ 2 Drives | 157.68 | 1.73 | 1622.08 ms | 2.11 % | 74.73

IOMeter - Workstation Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 2 Drives | 77.96 | 0.61 | 135.69 ms | 0.79 % | 98.68
Escalade 6400 RAID 0 w/ 2 Drives | 84.93 | 0.66 | 11.77 ms | 0.89 % | 95.43
Load = Very Light
SuperTrak-100 RAID 0 w/ 2 Drives | 78.29 | 0.61 | 253.15 ms | 0.84 % | 93.20
Escalade 6400 RAID 0 w/ 2 Drives | 130.58 | 1.02 | 30.62 ms | 1.36 % | 96.01
Load = Light
SuperTrak-100 RAID 0 w/ 2 Drives | 86.55 | 0.68 | 679.51 ms | 1.02 % | 84.85
Escalade 6400 RAID 0 w/ 2 Drives | 144.55 | 1.13 | 110.67 ms | 1.60 % | 90.34
Load = Moderate
SuperTrak-100 RAID 0 w/ 2 Drives | 95.08 | 0.74 | 1702.43 ms | 1.10 % | 86.44
Escalade 6400 RAID 0 w/ 2 Drives | 158.87 | 1.24 | 402.74 ms | 1.66 % | 95.70
Load = Heavy
SuperTrak-100 RAID 0 w/ 2 Drives | 105.11 | 0.82 | 5362.59 ms | 1.40 % | 75.08
Escalade 6400 RAID 0 w/ 2 Drives | 183.84 | 1.44 | 1391.58 ms | 2.41 % | 76.28

IOMeter - Database Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 2 Drives | 76.07 | 0.59 | 13.14 ms | 0.83 % | 91.65
Escalade 6400 RAID 0 w/ 2 Drives | 82.21 | 0.64 | 12.16 ms | 0.86 % | 95.59
Load = Very Light
SuperTrak-100 RAID 0 w/ 2 Drives | 76.69 | 0.60 | 52.15 ms | 0.85 % | 90.22
Escalade 6400 RAID 0 w/ 2 Drives | 121.60 | 0.95 | 32.89 ms | 1.25 % | 97.28
Load = Light
SuperTrak-100 RAID 0 w/ 2 Drives | 84.41 | 0.66 | 189.52 ms | 0.95 % | 88.85
Escalade 6400 RAID 0 w/ 2 Drives | 133.57 | 1.04 | 119.76 ms | 1.41 % | 94.73
Load = Moderate
SuperTrak-100 RAID 0 w/ 2 Drives | 91.27 | 0.71 | 700.80 ms | 1.00 % | 91.27
Escalade 6400 RAID 0 w/ 2 Drives | 146.21 | 1.14 | 437.64 ms | 1.55 % | 94.33
Load = Heavy
SuperTrak-100 RAID 0 w/ 2 Drives | 99.86 | 0.78 | 2558.77 ms | 1.39 % | 71.84
Escalade 6400 RAID 0 w/ 2 Drives | 165.85 | 1.30 | 1542.11 ms | 2.17 % | 76.43

IOMeter - Random Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 2 Drives | 198.06 | 1.55 | 5.05 ms | 1.91 % | 103.70
Escalade 6400 RAID 0 w/ 2 Drives | 191.39 | 1.50 | 5.22 ms | 1.96 % | 97.65
Load = Very Light
SuperTrak-100 RAID 0 w/ 2 Drives | 197.76 | 1.54 | 20.22 ms | 1.93 % | 102.47
Escalade 6400 RAID 0 w/ 2 Drives | 191.77 | 1.50 | 20.85 ms | 1.99 % | 96.37
Load = Light
SuperTrak-100 RAID 0 w/ 2 Drives | 198.46 | 1.55 | 80.60 ms | 2.02 % | 98.25
Escalade 6400 RAID 0 w/ 2 Drives | 191.68 | 1.50 | 83.45 ms | 2.07 % | 92.60
Load = Moderate
SuperTrak-100 RAID 0 w/ 2 Drives | 199.28 | 1.56 | 321.09 ms | 2.10 % | 94.90
Escalade 6400 RAID 0 w/ 2 Drives | 192.77 | 1.51 | 331.91 ms | 2.18 % | 88.43
Load = Heavy
SuperTrak-100 RAID 0 w/ 2 Drives | 207.37 | 1.62 | 1233.42 ms | 2.85 % | 72.76
Escalade 6400 RAID 0 w/ 2 Drives | 199.15 | 1.56 | 1284.57 ms | 2.83 % | 70.37

IOMeter - Sequential Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 2 Drives | 61.98 | 15.50 | 16.13 ms | 2.57 % | 24.12
Escalade 6400 RAID 0 w/ 2 Drives | 221.12 | 55.28 | 4.52 ms | 7.74 % | 28.57
Load = Very Light
SuperTrak-100 RAID 0 w/ 2 Drives | 62.17 | 15.54 | 64.34 ms | 2.59 % | 24.00
Escalade 6400 RAID 0 w/ 2 Drives | 221.48 | 55.37 | 18.05 ms | 8.91 % | 24.86
Load = Light
SuperTrak-100 RAID 0 w/ 2 Drives | 62.23 | 15.56 | 257.08 ms | 2.60 % | 23.93
Escalade 6400 RAID 0 w/ 2 Drives | 221.47 | 55.37 | 72.23 ms | 9.29 % | 23.84
Load = Moderate
SuperTrak-100 RAID 0 w/ 2 Drives | 62.23 | 15.56 | 1028.60 ms | 3.85 % | 16.16
Escalade 6400 RAID 0 w/ 2 Drives | 221.54 | 55.39 | 288.87 ms | 10.41 % | 21.28
Load = Heavy
SuperTrak-100 RAID 0 w/ 2 Drives | 61.92 | 15.48 | 3721.66 ms | 3.36 % | 18.43
Escalade 6400 RAID 0 w/ 2 Drives | 220.51 | 55.13 | 1115.61 ms | 11.88 % | 18.56

IOMeter - File Server Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 3 Drives | 68.45 | 0.74 | 14.60 ms | 0.66 % | 103.71
Escalade 6400 RAID 0 w/ 3 Drives | 73.46 | 0.79 | 13.61 ms | 0.82 % | 89.59
Load = Very Light
SuperTrak-100 RAID 0 w/ 3 Drives | 68.76 | 0.75 | 58.17 ms | 0.79 % | 87.04
Escalade 6400 RAID 0 w/ 3 Drives | 146.34 | 1.58 | 27.33 ms | 1.64 % | 89.23
Load = Light
SuperTrak-100 RAID 0 w/ 3 Drives | 75.67 | 0.81 | 211.42 ms | 0.85 % | 89.02
Escalade 6400 RAID 0 w/ 3 Drives | 174.30 | 1.88 | 91.78 ms | 4.92 % | 35.43
Load = Moderate
SuperTrak-100 RAID 0 w/ 3 Drives | 83.62 | 0.91 | 764.89 ms | 0.93 % | 89.91
Escalade 6400 RAID 0 w/ 3 Drives | 194.35 | 2.10 | 329.21 ms | 2.20 % | 88.34
Load = Heavy
SuperTrak-100 RAID 0 w/ 3 Drives | 93.58 | 1.00 | 2731.41 ms | 1.21 % | 77.34
Escalade 6400 RAID 0 w/ 3 Drives | 227.12 | 2.46 | 1126.42 ms | 3.09 % | 73.50

IOMeter - Workstation Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 3 Drives | 79.38 | 0.62 | 12.59 ms | 0.83 % | 95.64
Escalade 6400 RAID 0 w/ 3 Drives | 86.20 | 0.67 | 11.60 ms | 0.93 % | 92.69
Load = Very Light
SuperTrak-100 RAID 0 w/ 3 Drives | 79.88 | 0.62 | 50.07 ms | 0.87 % | 91.82
Escalade 6400 RAID 0 w/ 3 Drives | 166.87 | 1.30 | 23.97 ms | 1.77 % | 94.28
Load = Light
SuperTrak-100 RAID 0 w/ 3 Drives | 86.21 | 0.67 | 185.55 ms | 0.95 % | 90.75
Escalade 6400 RAID 0 w/ 3 Drives | 205.43 | 1.60 | 77.86 ms | 2.13 % | 96.45
Load = Moderate
SuperTrak-100 RAID 0 w/ 3 Drives | 95.48 | 0.75 | 669.89 ms | 1.02 % | 93.61
Escalade 6400 RAID 0 w/ 3 Drives | 227.60 | 1.78 | 281.17 ms | 2.38 % | 95.63
Load = Heavy
SuperTrak-100 RAID 0 w/ 3 Drives | 106.80 | 0.83 | 2393.52 ms | 1.46 % | 73.15
Escalade 6400 RAID 0 w/ 3 Drives | 263.69 | 2.06 | 970.16 ms | 3.50 % | 75.34

IOMeter - Database Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 3 Drives | 78.38 | 0.61 | 12.75 ms | 0.84 % | 93.31
Escalade 6400 RAID 0 w/ 3 Drives | 84.31 | 0.66 | 11.86 ms | 0.91 % | 92.65
Load = Very Light
SuperTrak-100 RAID 0 w/ 3 Drives | 78.12 | 0.61 | 51.20 ms | 0.84 % | 93.00
Escalade 6400 RAID 0 w/ 3 Drives | 170.70 | 1.33 | 23.43 ms | 1.93 % | 88.45
Load = Light
SuperTrak-100 RAID 0 w/ 3 Drives | 84.40 | 0.66 | 189.57 ms | 0.99 % | 85.25
Escalade 6400 RAID 0 w/ 3 Drives | 190.08 | 1.48 | 84.17 ms | 2.04 % | 93.18
Load = Moderate
SuperTrak-100 RAID 0 w/ 3 Drives | 93.10 | 0.73 | 687.21 ms | 1.10 % | 84.64
Escalade 6400 RAID 0 w/ 3 Drives | 209.49 | 1.64 | 305.43 ms | 2.28 % | 91.88
Load = Heavy
SuperTrak-100 RAID 0 w/ 3 Drives | 104.37 | 0.82 | 2447.82 ms | 1.41 % | 74.02
Escalade 6400 RAID 0 w/ 3 Drives | 235.98 | 1.84 | 1084.20 ms | 3.13 % | 75.39

IOMeter - Random Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 3 Drives | 295.93 | 2.31 | 3.38 ms | 2.64 % | 112.09
Escalade 6400 RAID 0 w/ 3 Drives | 280.47 | 2.19 | 3.56 ms | 2.83 % | 99.11
Load = Very Light
SuperTrak-100 RAID 0 w/ 3 Drives | 295.39 | 2.31 | 13.54 ms | 2.97 % | 99.46
Escalade 6400 RAID 0 w/ 3 Drives | 281.26 | 2.20 | 14.22 ms | 2.99 % | 94.07
Load = Light
SuperTrak-100 RAID 0 w/ 3 Drives | 296.63 | 2.32 | 53.93 ms | 2.98 % | 99.54
Escalade 6400 RAID 0 w/ 3 Drives | 281.78 | 2.20 | 56.76 ms | 2.96 % | 95.20
Load = Moderate
SuperTrak-100 RAID 0 w/ 3 Drives | 296.69 | 2.32 | 215.68 ms | 3.20 % | 92.72
Escalade 6400 RAID 0 w/ 3 Drives | 284.74 | 2.22 | 224.61 ms | 3.18 % | 89.54
Load = Heavy
SuperTrak-100 RAID 0 w/ 3 Drives | 305.16 | 2.38 | 838.30 ms | 4.00 % | 76.29
Escalade 6400 RAID 0 w/ 3 Drives | 289.87 | 2.26 | 882.68 ms | 4.16 % | 69.68

IOMeter - Sequential Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 3 Drives | 67.45 | 16.86 | 14.82 ms | 2.49 % | 27.09
Escalade 6400 RAID 0 w/ 3 Drives | 231.48 | 57.87 | 4.31 ms | 8.41 % | 27.52
Load = Very Light
SuperTrak-100 RAID 0 w/ 3 Drives | 67.72 | 16.93 | 59.06 ms | 2.66 % | 25.46
Escalade 6400 RAID 0 w/ 3 Drives | 247.71 | 61.93 | 16.14 ms | 11.32 % | 21.88
Load = Light
SuperTrak-100 RAID 0 w/ 3 Drives | 67.72 | 16.93 | 236.27 ms | 2.71 % | 24.99
Escalade 6400 RAID 0 w/ 3 Drives | 247.55 | 61.89 | 64.62 ms | 11.09 % | 22.32
Load = Moderate
SuperTrak-100 RAID 0 w/ 3 Drives | 67.69 | 16.92 | 945.45 ms | 3.98 % | 17.01
Escalade 6400 RAID 0 w/ 3 Drives | 247.32 | 61.83 | 258.73 ms | 11.85 % | 20.87
Load = Heavy
SuperTrak-100 RAID 0 w/ 3 Drives | 67.35 | 16.84 | 3422.72 ms | 3.40 % | 19.81
Escalade 6400 RAID 0 w/ 3 Drives | 246.84 | 61.71 | 997.71 ms | 14.08 % | 17.53

IOMeter - File Server Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 4 Drives | 68.56 | 0.75 | 14.58 ms | 0.64 % | 107.12
Escalade 6400 RAID 0 w/ 4 Drives | 74.87 | 0.81 | 13.35 ms | 0.75 % | 99.83
Load = Very Light
SuperTrak-100 RAID 0 w/ 4 Drives | 68.94 | 0.75 | 58.02 ms | 0.77 % | 89.53
Escalade 6400 RAID 0 w/ 4 Drives | 169.28 | 1.82 | 23.62 ms | 1.79 % | 94.57
Load = Light
SuperTrak-100 RAID 0 w/ 4 Drives | 74.25 | 0.80 | 215.46 ms | 0.86 % | 86.34
Escalade 6400 RAID 0 w/ 4 Drives | 225.49 | 2.42 | 70.95 ms | 2.44 % | 92.41
Load = Moderate
SuperTrak-100 RAID 0 w/ 4 Drives | 82.63 | 0.89 | 774.14 ms | 0.99 % | 83.46
Escalade 6400 RAID 0 w/ 4 Drives | 251.12 | 2.72 | 254.78 ms | 2.64 % | 95.12
Load = Heavy
SuperTrak-100 RAID 0 w/ 4 Drives | 92.92 | 1.00 | 2749.10 ms | 1.23 % | 75.54
Escalade 6400 RAID 0 w/ 4 Drives | 293.66 | 3.19 | 871.55 ms | 3.73 % | 78.73

IOMeter - Workstation Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 4 Drives | 79.32 | 0.62 | 12.60 ms | 0.80 % | 99.15
Escalade 6400 RAID 0 w/ 4 Drives | 87.28 | 0.68 | 11.45 ms | 0.91 % | 95.91
Load = Very Light
SuperTrak-100 RAID 0 w/ 4 Drives | 79.68 | 0.62 | 50.19 ms | 0.91 % | 87.56
Escalade 6400 RAID 0 w/ 4 Drives | 188.96 | 1.48 | 21.16 ms | 1.95 % | 96.90
Load = Light
SuperTrak-100 RAID 0 w/ 4 Drives | 84.65 | 0.66 | 189.00 ms | 1.02 % | 82.99
Escalade 6400 RAID 0 w/ 4 Drives | 265.44 | 2.07 | 60.27 ms | 2.78 % | 95.48
Load = Moderate
SuperTrak-100 RAID 0 w/ 4 Drives | 94.29 | 0.74 | 678.35 ms | 1.05 % | 89.80
Escalade 6400 RAID 0 w/ 4 Drives | 294.50 | 2.30 | 217.28 ms | 2.98 % | 98.83
Load = Heavy
SuperTrak-100 RAID 0 w/ 4 Drives | 105.75 | 0.83 | 2416.25 ms | 1.52 % | 69.57
Escalade 6400 RAID 0 w/ 4 Drives | 338.69 | 2.65 | 755.61 ms | 4.40 % | 76.97

IOMeter - Database Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 4 Drives | 78.94 | 0.62 | 12.66 ms | 0.82 % | 96.27
Escalade 6400 RAID 0 w/ 4 Drives | 85.71 | 0.67 | 11.66 ms | 0.89 % | 96.30
Load = Very Light
SuperTrak-100 RAID 0 w/ 4 Drives | 78.94 | 0.62 | 50.66 ms | 0.94 % | 83.98
Escalade 6400 RAID 0 w/ 4 Drives | 192.41 | 1.50 | 20.78 ms | 2.07 % | 92.95
Load = Light
SuperTrak-100 RAID 0 w/ 4 Drives | 83.21 | 0.65 | 192.23 ms | 0.96 % | 86.68
Escalade 6400 RAID 0 w/ 4 Drives | 245.70 | 1.92 | 65.11 ms | 2.49 % | 98.67
Load = Moderate
SuperTrak-100 RAID 0 w/ 4 Drives | 92.34 | 0.72 | 692.69 ms | 1.09 % | 84.72
Escalade 6400 RAID 0 w/ 4 Drives | 272.60 | 2.13 | 234.74 ms | 2.76 % | 98.77
Load = Heavy
SuperTrak-100 RAID 0 w/ 4 Drives | 104.38 | 0.82 | 2448.09 ms | 1.34 % | 77.90
Escalade 6400 RAID 0 w/ 4 Drives | 308.59 | 2.41 | 829.18 ms | 3.86 % | 79.95

IOMeter - Random Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 4 Drives | 294.08 | 2.30 | 3.40 ms | 2.74 % | 107.33
Escalade 6400 RAID 0 w/ 4 Drives | 372.29 | 2.91 | 2.68 ms | 3.65 % | 102.00
Load = Very Light
SuperTrak-100 RAID 0 w/ 4 Drives | 295.39 | 2.31 | 13.54 ms | 3.05 % | 96.85
Escalade 6400 RAID 0 w/ 4 Drives | 372.26 | 2.91 | 10.74 ms | 3.57 % | 104.27
Load = Light
SuperTrak-100 RAID 0 w/ 4 Drives | 205.69 | 1.61 | 77.67 ms | 2.19 % | 93.92
Escalade 6400 RAID 0 w/ 4 Drives | 374.17 | 2.92 | 42.75 ms | 3.81 % | 98.21
Load = Moderate
SuperTrak-100 RAID 0 w/ 4 Drives | 42.87 | 0.33 | 1484.12 ms | 0.46 % | 93.20
Escalade 6400 RAID 0 w/ 4 Drives | 375.69 | 2.94 | 170.26 ms | 3.85 % | 97.58
Load = Heavy
SuperTrak-100 RAID 0 w/ 4 Drives | 74.84 | 0.58 | 3424.11 ms | 1.06 % | 70.60
Escalade 6400 RAID 0 w/ 4 Drives | 381.89 | 2.98 | 670.05 ms | 5.16 % | 74.01

IOMeter - Sequential Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 0 w/ 4 Drives | 2.26 | 0.56 | 441.48 ms | 0.14 % | 16.14
Escalade 6400 RAID 0 w/ 4 Drives | 231.87 | 57.97 | 4.31 ms | 8.07 % | 28.73
Load = Very Light
SuperTrak-100 RAID 0 w/ 4 Drives | 7.49 | 1.87 | 532.79 ms | 0.31 % | 24.16
Escalade 6400 RAID 0 w/ 4 Drives | 248.27 | 62.07 | 16.10 ms | 11.07 % | 22.43
Load = Light
SuperTrak-100 RAID 0 w/ 4 Drives | 7.47 | 1.87 | 2002.53 ms | 0.28 % | 26.68
Escalade 6400 RAID 0 w/ 4 Drives | 248.32 | 62.08 | 64.42 ms | 11.32 % | 21.94
Load = Moderate
SuperTrak-100 RAID 0 w/ 4 Drives | 7.48 | 1.87 | 8563.32 ms | 0.37 % | 20.22
Escalade 6400 RAID 0 w/ 4 Drives | 248.35 | 62.09 | 257.68 ms | 11.13 % | 22.31
Load = Heavy
SuperTrak-100 RAID 0 w/ 4 Drives | 7.34 | 1.84 | 32893.66 ms | 0.44 % | 16.68
Escalade 6400 RAID 0 w/ 4 Drives | 248.07 | 62.02 | 991.64 ms | 14.40 % | 17.23

RAID 0 results are very interesting. The difference between the SuperTrak and Escalade jumps out: the Escalade scales very nicely as more drives are added to the array while the SuperTrak's performance stays about the same.

Each drive added to the Escalade delivers a significant performance increase in Workstation, File Server, Database, and Random Write tests, especially under Heavy loads. It's nothing less than what one would expect from a card capable of load balancing: as more drives are added, I/Os may be distributed over each of them, significantly increasing performance.
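The block-to-drive mapping behind this kind of striping can be sketched in a few lines. This is a hypothetical illustration, not the firmware logic of either card; the stripe size constant is an assumption:

```python
# Illustrative RAID 0 address mapping: logical blocks are striped
# round-robin across the member drives, so independent requests tend
# to land on different drives and can be serviced in parallel.

STRIPE_BLOCKS = 128  # assumed stripe size, in blocks (hypothetical)

def raid0_map(logical_block: int, num_drives: int) -> tuple[int, int]:
    """Return (drive index, physical block on that drive)."""
    stripe = logical_block // STRIPE_BLOCKS       # which stripe the block falls in
    offset = logical_block % STRIPE_BLOCKS        # position within that stripe
    drive = stripe % num_drives                   # stripes rotate across drives
    physical = (stripe // num_drives) * STRIPE_BLOCKS + offset
    return drive, physical
```

With four drives, logical blocks 0, 128, 256, and 384 land on drives 0, 1, 2, and 3 respectively, which is why queued I/Os can be distributed over every member of the array.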

That said, we're unclear on why the Sequential Write scores barely improve when a third drive is added and not at all with a fourth - the Escalade's sequential write performance tops out at "only" about 62 MB/sec in these tests.

On the other end of the spectrum, the SuperTrak's Workstation, File Server, and Database scores increase by only a few IO/sec when going from a single drive to a 4-drive RAID 0 array. This is disappointing... the SuperTrak is certainly not inexpensive compared to other ATA RAID solutions; we expected much better performance.

The SuperTrak's Random Write performance does increase as more drives are added (though the sudden drop-off under Moderate and Heavy loads with a four-drive array is curious). These improvements are likely due to write caching by both the controller and the drives themselves. Sequential Write performance increases somewhat with two- and three-drive arrays, then, for some unknown reason, drops off dramatically in a four-drive configuration.

Confused? The StorageReview.com RAID Guide explains RAID 1!


RAID 1...


IOMeter - File Server Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 1 w/ 2 Drives | 71.72 | 0.77 | 13.94 ms | 0.68 % | 105.47
Escalade 6400 RAID 1 w/ 2 Drives | 74.43 | 0.80 | 13.43 ms | 0.74 % | 100.58
Load = Very Light
SuperTrak-100 RAID 1 w/ 2 Drives | 73.20 | 0.79 | 54.64 ms | 0.79 % | 92.66
Escalade 6400 RAID 1 w/ 2 Drives | 116.88 | 1.27 | 34.22 ms | 1.28 % | 91.31
Load = Light
SuperTrak-100 RAID 1 w/ 2 Drives | 82.40 | 0.89 | 194.14 ms | 0.92 % | 89.57
Escalade 6400 RAID 1 w/ 2 Drives | 126.98 | 1.37 | 125.82 ms | 1.49 % | 85.22
Load = Moderate
SuperTrak-100 RAID 1 w/ 2 Drives | 93.28 | 1.03 | 685.80 ms | 0.98 % | 95.18
Escalade 6400 RAID 1 w/ 2 Drives | 135.37 | 1.47 | 472.43 ms | 1.51 % | 89.65
Load = Heavy
SuperTrak-100 RAID 1 w/ 2 Drives | 102.37 | 1.12 | 2496.30 ms | 1.27 % | 80.61
Escalade 6400 RAID 1 w/ 2 Drives | 152.29 | 1.64 | 1679.50 ms | 1.90 % | 80.15

IOMeter - Workstation Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 1 w/ 2 Drives | 84.87 | 0.66 | 11.78 ms | 0.83 % | 102.25
Escalade 6400 RAID 1 w/ 2 Drives | 87.17 | 0.68 | 11.47 ms | 0.85 % | 102.55
Load = Very Light
SuperTrak-100 RAID 1 w/ 2 Drives | 85.90 | 0.67 | 46.56 ms | 0.97 % | 88.56
Escalade 6400 RAID 1 w/ 2 Drives | 131.54 | 1.02 | 30.40 ms | 1.45 % | 90.72
Load = Light
SuperTrak-100 RAID 1 w/ 2 Drives | 96.40 | 0.75 | 165.95 ms | 1.01 % | 95.45
Escalade 6400 RAID 1 w/ 2 Drives | 137.33 | 1.07 | 116.50 ms | 1.48 % | 92.79
Load = Moderate
SuperTrak-100 RAID 1 w/ 2 Drives | 108.49 | 0.85 | 589.64 ms | 1.14 % | 95.17
Escalade 6400 RAID 1 w/ 2 Drives | 149.93 | 1.17 | 426.68 ms | 1.51 % | 99.29
Load = Heavy
SuperTrak-100 RAID 1 w/ 2 Drives | 119.50 | 0.93 | 2139.58 ms | 1.64 % | 72.87
Escalade 6400 RAID 1 w/ 2 Drives | 169.88 | 1.32 | 1505.44 ms | 2.42 % | 70.20

IOMeter - Database Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 1 w/ 2 Drives | 78.88 | 0.62 | 12.67 ms | 0.75 % | 105.17
Escalade 6400 RAID 1 w/ 2 Drives | 81.77 | 0.64 | 12.22 ms | 0.85 % | 96.20
Load = Very Light
SuperTrak-100 RAID 1 w/ 2 Drives | 80.39 | 0.63 | 49.76 ms | 0.87 % | 92.40
Escalade 6400 RAID 1 w/ 2 Drives | 109.63 | 0.86 | 36.48 ms | 1.27 % | 86.32
Load = Light
SuperTrak-100 RAID 1 w/ 2 Drives | 91.59 | 0.72 | 174.66 ms | 0.99 % | 92.52
Escalade 6400 RAID 1 w/ 2 Drives | 113.54 | 0.89 | 140.89 ms | 1.26 % | 90.11
Load = Moderate
SuperTrak-100 RAID 1 w/ 2 Drives | 103.86 | 0.81 | 615.98 ms | 1.11 % | 93.57
Escalade 6400 RAID 1 w/ 2 Drives | 122.89 | 0.96 | 520.55 ms | 1.28 % | 96.01
Load = Heavy
SuperTrak-100 RAID 1 w/ 2 Drives | 112.91 | 0.88 | 2263.55 ms | 1.54 % | 73.32
Escalade 6400 RAID 1 w/ 2 Drives | 139.90 | 1.09 | 1828.14 ms | 1.90 % | 73.63

IOMeter - Random Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 1 w/ 2 Drives | 109.51 | 0.86 | 9.13 ms | 1.03 % | 106.32
Escalade 6400 RAID 1 w/ 2 Drives | 106.62 | 0.83 | 9.37 ms | 1.07 % | 99.64
Load = Very Light
SuperTrak-100 RAID 1 w/ 2 Drives | 109.21 | 0.85 | 36.62 ms | 1.04 % | 105.01
Escalade 6400 RAID 1 w/ 2 Drives | 106.59 | 0.83 | 37.52 ms | 1.23 % | 86.66
Load = Light
SuperTrak-100 RAID 1 w/ 2 Drives | 109.32 | 0.85 | 146.35 ms | 1.12 % | 97.61
Escalade 6400 RAID 1 w/ 2 Drives | 106.78 | 0.83 | 149.81 ms | 1.26 % | 84.75
Load = Moderate
SuperTrak-100 RAID 1 w/ 2 Drives | 111.20 | 0.87 | 575.22 ms | 1.18 % | 94.24
Escalade 6400 RAID 1 w/ 2 Drives | 107.48 | 0.84 | 595.34 ms | 1.30 % | 82.68
Load = Heavy
SuperTrak-100 RAID 1 w/ 2 Drives | 119.03 | 0.93 | 2147.58 ms | 1.65 % | 72.14
Escalade 6400 RAID 1 w/ 2 Drives | 113.70 | 0.89 | 2249.21 ms | 1.67 % | 68.08

IOMeter - Sequential Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 1 w/ 2 Drives | 41.28 | 10.32 | 24.21 ms | 1.91 % | 21.61
Escalade 6400 RAID 1 w/ 2 Drives | 110.85 | 27.71 | 9.02 ms | 3.83 % | 28.94
Load = Very Light
SuperTrak-100 RAID 1 w/ 2 Drives | 41.39 | 10.35 | 96.62 ms | 1.91 % | 21.67
Escalade 6400 RAID 1 w/ 2 Drives | 110.78 | 27.70 | 36.10 ms | 4.30 % | 25.76
Load = Light
SuperTrak-100 RAID 1 w/ 2 Drives | 41.39 | 10.35 | 386.60 ms | 2.04 % | 20.29
Escalade 6400 RAID 1 w/ 2 Drives | 110.83 | 27.71 | 144.34 ms | 4.26 % | 26.02
Load = Moderate
SuperTrak-100 RAID 1 w/ 2 Drives | 41.37 | 10.34 | 1546.60 ms | 2.61 % | 15.85
Escalade 6400 RAID 1 w/ 2 Drives | 110.85 | 27.71 | 577.32 ms | 5.66 % | 19.58
Load = Heavy
SuperTrak-100 RAID 1 w/ 2 Drives | 41.17 | 10.39 | 5591.48 ms | 2.40 % | 17.15
Escalade 6400 RAID 1 w/ 2 Drives | 109.59 | 27.40 | 2245.22 ms | 5.83 % | 18.80

Although many assume that redundancy is RAID 1's only benefit, the scores presented above illustrate that, depending on the quality of the controller, significant performance gains may be realized from a RAID 1 array relative to a single drive. The reason, as mentioned earlier, is load balancing. Keep in mind that a RAID 1 array consists of two drives that store exactly the same data. As a result, both drives may be utilized simultaneously to service different read requests. Write requests are another matter, however: each write must be carried out by every drive to ensure that content remains identical across all disks.
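The read/write asymmetry described above can be captured in a toy dispatcher. This is purely illustrative (it is not 3Ware's actual TwinStor algorithm); it simply sends each read to the mirror with the shorter queue, while every write is queued on both mirrors:

```python
# Toy RAID 1 dispatcher: reads are load-balanced across the mirrors,
# writes must hit every mirror to keep the copies identical.

class Mirror:
    def __init__(self, n_drives: int = 2):
        # One pending-request queue per member drive.
        self.queues = [[] for _ in range(n_drives)]

    def read(self, block: int) -> int:
        # Any mirror can satisfy a read; pick the least-busy drive.
        drive = min(range(len(self.queues)), key=lambda d: len(self.queues[d]))
        self.queues[drive].append(("R", block))
        return drive

    def write(self, block: int) -> None:
        # Writes cannot be balanced: every drive must stay in sync.
        for q in self.queues:
            q.append(("W", block))
```

Two outstanding reads end up on different drives (hence the near-doubling of read throughput under load), while a single write occupies both.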

Although the SuperTrak's RAID 1 scores aren't earth shattering, Workstation, File Server, and Database scores post significant improvements over their respective base scores. Random Write performance remains about the same as that of a single drive while Sequential Write performance is somewhat lower.

On the other hand, the Escalade's RAID 1 scores are very impressive. Although they start slightly lower than a single drive under a Linear load, Workstation, File Server, and Database scores skyrocket under Light loads and remain much higher than base scores up through the Heavy load tests. TwinStor in action... we'd be lying if we said that we weren't impressed.

Random Write performance, on the other hand, is much worse than that of a single drive, and we're somewhat baffled by the size of the disparity. While it's expected that RAID 1 random write performance will be equal to or somewhat worse than a single drive's, the gap displayed by the Escalade is surprising. Sequential write performance, meanwhile, is virtually identical to that of a single drive.

Confused? The StorageReview.com RAID Guide explains RAID 01 and RAID 10!


RAID 10/01...


IOMeter - File Server Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 01 w/ 4 Drives | 73.05 | 0.79 | 13.69 ms | 0.68 % | 107.43
Escalade 6400 RAID 10 w/ 4 Drives | 75.01 | 0.81 | 13.33 ms | 0.74 % | 101.36
Load = Very Light
SuperTrak-100 RAID 01 w/ 4 Drives | 73.30 | 0.79 | 54.57 ms | 0.86 % | 85.23
Escalade 6400 RAID 10 w/ 4 Drives | 176.90 | 1.92 | 22.61 ms | 1.88 % | 94.10
Load = Light
SuperTrak-100 RAID 01 w/ 4 Drives | 77.02 | 0.82 | 207.69 ms | 0.92 % | 83.72
Escalade 6400 RAID 10 w/ 4 Drives | 197.87 | 2.15 | 80.85 ms | 2.03 % | 97.47
Load = Moderate
SuperTrak-100 RAID 01 w/ 4 Drives | 83.41 | 0.91 | 766.79 ms | 0.96 % | 86.89
Escalade 6400 RAID 10 w/ 4 Drives | 216.07 | 2.32 | 296.09 ms | 2.20 % | 98.21
Load = Heavy
SuperTrak-100 RAID 01 w/ 4 Drives | 92.29 | 1.00 | 2768.59 ms | 1.26 % | 73.25
Escalade 6400 RAID 10 w/ 4 Drives | 247.23 | 2.68 | 1034.84 ms | 3.22 % | 76.78

IOMeter - Workstation Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 01 w/ 4 Drives | 84.21 | 0.66 | 11.87 ms | 0.84 % | 100.25
Escalade 6400 RAID 10 w/ 4 Drives | 86.34 | 0.67 | 11.58 ms | 0.88 % | 98.11
Load = Very Light
SuperTrak-100 RAID 01 w/ 4 Drives | 84.69 | 0.66 | 47.23 ms | 0.96 % | 88.22
Escalade 6400 RAID 10 w/ 4 Drives | 198.34 | 1.55 | 20.16 ms | 2.09 % | 94.90
Load = Light
SuperTrak-100 RAID 01 w/ 4 Drives | 87.93 | 0.69 | 181.93 ms | 1.01 % | 87.06
Escalade 6400 RAID 10 w/ 4 Drives | 228.41 | 1.78 | 70.04 ms | 2.42 % | 94.38
Load = Moderate
SuperTrak-100 RAID 01 w/ 4 Drives | 94.80 | 0.74 | 674.78 ms | 1.03 % | 92.04
Escalade 6400 RAID 10 w/ 4 Drives | 249.93 | 1.95 | 256.02 ms | 2.50 % | 99.97
Load = Heavy
SuperTrak-100 RAID 01 w/ 4 Drives | 104.74 | 0.82 | 2440.41 ms | 1.47 % | 71.25
Escalade 6400 RAID 10 w/ 4 Drives | 283.44 | 2.21 | 902.78 ms | 3.66 % | 77.44

IOMeter - Database Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 01 w/ 4 Drives | 82.38 | 0.64 | 12.14 ms | 0.83 % | 99.25
Escalade 6400 RAID 10 w/ 4 Drives | 81.18 | 0.63 | 12.31 ms | 0.86 % | 94.40
Load = Very Light
SuperTrak-100 RAID 01 w/ 4 Drives | 79.29 | 0.62 | 50.45 ms | 0.87 % | 91.14
Escalade 6400 RAID 10 w/ 4 Drives | 178.29 | 1.39 | 22.43 ms | 1.91 % | 93.35
Load = Light
SuperTrak-100 RAID 01 w/ 4 Drives | 84.57 | 0.66 | 189.14 ms | 0.98 % | 86.30
Escalade 6400 RAID 10 w/ 4 Drives | 192.74 | 1.51 | 83.00 ms | 2.01 % | 95.89
Load = Moderate
SuperTrak-100 RAID 01 w/ 4 Drives | 90.44 | 0.71 | 707.37 ms | 1.03 % | 87.81
Escalade 6400 RAID 10 w/ 4 Drives | 208.61 | 1.63 | 306.70 ms | 2.23 % | 93.55
Load = Heavy
SuperTrak-100 RAID 01 w/ 4 Drives | 99.87 | 0.78 | 2558.74 ms | 1.42 % | 70.33
Escalade 6400 RAID 10 w/ 4 Drives | 238.42 | 1.86 | 1073.04 ms | 3.15 % | 75.69

IOMeter - Random Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 01 w/ 4 Drives | 195.34 | 1.53 | 5.12 ms | 1.80 % | 108.52
Escalade 6400 RAID 10 w/ 4 Drives | 185.73 | 1.45 | 5.38 ms | 1.72 % | 107.98
Load = Very Light
SuperTrak-100 RAID 01 w/ 4 Drives | 195.21 | 1.53 | 20.49 ms | 2.06 % | 94.76
Escalade 6400 RAID 10 w/ 4 Drives | 186.26 | 1.46 | 21.47 ms | 1.91 % | 97.52
Load = Light
SuperTrak-100 RAID 01 w/ 4 Drives | 196.08 | 1.53 | 81.59 ms | 2.04 % | 96.12
Escalade 6400 RAID 10 w/ 4 Drives | 186.38 | 1.46 | 85.83 ms | 2.09 % | 89.18
Load = Moderate
SuperTrak-100 RAID 01 w/ 4 Drives | 197.38 | 1.54 | 324.18 ms | 2.11 % | 93.55
Escalade 6400 RAID 10 w/ 4 Drives | 187.52 | 1.46 | 341.14 ms | 1.94 % | 96.66
Load = Heavy
SuperTrak-100 RAID 01 w/ 4 Drives | 205.43 | 1.60 | 1245.30 ms | 2.92 % | 70.35
Escalade 6400 RAID 10 w/ 4 Drives | 193.09 | 1.51 | 1324.80 ms | 2.79 % | 69.21

IOMeter - Sequential Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 01 w/ 4 Drives | 48.06 | 12.02 | 20.80 ms | 2.03 % | 23.67
Escalade 6400 RAID 10 w/ 4 Drives | 121.15 | 30.29 | 8.25 ms | 4.40 % | 27.53
Load = Very Light
SuperTrak-100 RAID 01 w/ 4 Drives | 48.24 | 12.06 | 82.90 ms | 1.98 % | 24.36
Escalade 6400 RAID 10 w/ 4 Drives | 125.84 | 31.46 | 31.78 ms | 5.49 % | 22.92
Load = Light
SuperTrak-100 RAID 01 w/ 4 Drives | 48.26 | 12.06 | 331.54 ms | 2.02 % | 23.89
Escalade 6400 RAID 10 w/ 4 Drives | 125.90 | 31.47 | 127.08 ms | 5.52 % | 22.81
Load = Moderate
SuperTrak-100 RAID 01 w/ 4 Drives | 48.24 | 12.06 | 1326.58 ms | 2.93 % | 16.46
Escalade 6400 RAID 10 w/ 4 Drives | 125.82 | 31.45 | 508.61 ms | 6.16 % | 20.43
Load = Heavy
SuperTrak-100 RAID 01 w/ 4 Drives | 47.99 | 12.00 | 4802.79 ms | 2.58 % | 18.60
Escalade 6400 RAID 10 w/ 4 Drives | 125.52 | 31.38 | 1959.77 ms | 7.47 % | 16.80

The SuperTrak's RAID 01 Workstation, File Server, and Database scores place higher than those of a single drive, though not nearly as high as we had hoped. Random write performance increases significantly, while sequential write performance decreases somewhat.

The Escalade, on the other hand, performs very well in RAID 10. Its Workstation, File Server, and Database performance even tops its 3-drive RAID 0 showing. This again displays TwinStor's effectiveness.

Unsurprisingly, random write performance isn't quite as high, but still much higher than a single drive. Sequential write performance remains about the same, as expected.
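The addressing behind a four-drive RAID 10 array can be sketched as a stripe across mirrored pairs. This is a hypothetical illustration of the general technique, not either card's firmware; the stripe size is an assumption:

```python
# Illustrative RAID 10 addressing with four drives: data is striped
# (RAID 0) across two mirrored pairs (RAID 1). A read can be served
# by either member of a pair, but a write must hit both.

STRIPE_BLOCKS = 128  # assumed stripe size, in blocks (hypothetical)

def raid10_map(logical_block: int, num_pairs: int = 2):
    """Return ([drive indices touched by a write], physical block)."""
    stripe = logical_block // STRIPE_BLOCKS
    offset = logical_block % STRIPE_BLOCKS
    pair = stripe % num_pairs                     # stripes rotate across pairs
    physical = (stripe // num_pairs) * STRIPE_BLOCKS + offset
    drives = [pair * 2, pair * 2 + 1]             # both members of the mirror pair
    return drives, physical
```

Reads get both striping and mirror load balancing (four spindles to choose from), which is consistent with the Escalade's RAID 10 read-heavy scores topping even its 3-drive RAID 0 numbers, while a write still occupies two of the four drives.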

Confused? The StorageReview.com RAID Guide explains RAID 5!


RAID 5...


IOMeter - File Server Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 5 w/ 3 Drives | 62.12 | 0.67 | 16.09 ms | 0.76 % | 81.74
Escalade 6400 RAID 5 w/ 3 Drives | 54.82 | 0.59 | 18.23 ms | 0.69 % | 79.45
Load = Very Light
SuperTrak-100 RAID 5 w/ 3 Drives | 62.44 | 0.67 | 64.06 ms | 0.70 % | 89.20
Escalade 6400 RAID 5 w/ 3 Drives | 98.35 | 1.07 | 40.66 ms | 1.41 % | 69.75
Load = Light
SuperTrak-100 RAID 5 w/ 3 Drives | 67.58 | 0.74 | 236.74 ms | 0.78 % | 86.64
Escalade 6400 RAID 5 w/ 3 Drives | 105.63 | 1.14 | 151.40 ms | 1.49 % | 70.89
Load = Moderate
SuperTrak-100 RAID 5 w/ 3 Drives | 73.93 | 0.80 | 865.24 ms | 0.87 % | 84.98
Escalade 6400 RAID 5 w/ 3 Drives | 112.76 | 1.22 | 567.62 ms | 1.58 % | 71.37
Load = Heavy
SuperTrak-100 RAID 5 w/ 3 Drives | 82.18 | 0.89 | 3106.98 ms | 1.05 % | 78.27
Escalade 6400 RAID 5 w/ 3 Drives | 138.97 | 1.50 | 1840.72 ms | 2.37 % | 58.64

IOMeter - Workstation Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 5 w/ 3 Drives | 71.65 | 0.56 | 13.95 ms | 0.71 % | 100.92
Escalade 6400 RAID 5 w/ 3 Drives | 62.84 | 0.49 | 15.91 ms | 0.75 % | 83.79
Load = Very Light
SuperTrak-100 RAID 5 w/ 3 Drives | 72.14 | 0.56 | 55.44 ms | 0.78 % | 92.49
Escalade 6400 RAID 5 w/ 3 Drives | 112.45 | 0.88 | 35.56 ms | 1.41 % | 79.75
Load = Light
SuperTrak-100 RAID 5 w/ 3 Drives | 76.72 | 0.60 | 208.53 ms | 0.86 % | 89.21
Escalade 6400 RAID 5 w/ 3 Drives | 119.90 | 0.94 | 133.43 ms | 1.73 % | 69.31
Load = Moderate
SuperTrak-100 RAID 5 w/ 3 Drives | 83.33 | 0.65 | 767.47 ms | 0.99 % | 84.17
Escalade 6400 RAID 5 w/ 3 Drives | 127.68 | 1.00 | 501.19 ms | 1.71 % | 74.67
Load = Heavy
SuperTrak-100 RAID 5 w/ 3 Drives | 92.52 | 0.72 | 2762.13 ms | 1.34 % | 69.04
Escalade 6400 RAID 5 w/ 3 Drives | 154.92 | 1.21 | 1650.93 ms | 2.72 % | 56.96

IOMeter - Database Access Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 5 w/ 3 Drives | 67.07 | 0.52 | 14.91 ms | 0.72 % | 93.15
Escalade 6400 RAID 5 w/ 3 Drives | 51.04 | 0.40 | 19.59 ms | 0.74 % | 68.97
Load = Very Light
SuperTrak-100 RAID 5 w/ 3 Drives | 67.01 | 0.52 | 59.68 ms | 0.74 % | 90.55
Escalade 6400 RAID 5 w/ 3 Drives | 83.21 | 0.65 | 48.06 ms | 1.17 % | 71.12
Load = Light
SuperTrak-100 RAID 5 w/ 3 Drives | 71.02 | 0.55 | 225.26 ms | 0.78 % | 91.05
Escalade 6400 RAID 5 w/ 3 Drives | 88.12 | 0.69 | 181.52 ms | 1.34 % | 65.76
Load = Moderate
SuperTrak-100 RAID 5 w/ 3 Drives | 76.78 | 0.60 | 832.94 ms | 0.87 % | 88.25
Escalade 6400 RAID 5 w/ 3 Drives | 92.54 | 0.72 | 691.97 ms | 1.36 % | 68.04
Load = Heavy
SuperTrak-100 RAID 5 w/ 3 Drives | 83.99 | 0.66 | 3042.84 ms | 1.17 % | 71.79
Escalade 6400 RAID 5 w/ 3 Drives | 113.08 | 0.88 | 2261.67 ms | 2.06 % | 54.89

IOMeter - Random Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 5 w/ 3 Drives | 73.31 | 0.57 | 13.64 ms | 0.72 % | 101.82
Escalade 6400 RAID 5 w/ 3 Drives | 35.52 | 0.28 | 28.14 ms | 0.66 % | 53.82
Load = Very Light
SuperTrak-100 RAID 5 w/ 3 Drives | 73.26 | 0.57 | 54.60 ms | 0.80 % | 91.58
Escalade 6400 RAID 5 w/ 3 Drives | 42.22 | 0.33 | 94.72 ms | 0.84 % | 50.26
Load = Light
SuperTrak-100 RAID 5 w/ 3 Drives | 78.00 | 0.61 | 205.06 ms | 0.95 % | 82.11
Escalade 6400 RAID 5 w/ 3 Drives | 43.10 | 0.34 | 371.21 ms | 0.94 % | 45.85
Load = Moderate
SuperTrak-100 RAID 5 w/ 3 Drives | 81.60 | 0.64 | 783.68 ms | 0.89 % | 91.69
Escalade 6400 RAID 5 w/ 3 Drives | 45.90 | 0.36 | 1394.71 ms | 0.98 % | 46.84
Load = Heavy
SuperTrak-100 RAID 5 w/ 3 Drives | 86.60 | 0.68 | 2949.68 ms | 1.19 % | 72.77
Escalade 6400 RAID 5 w/ 3 Drives | 55.24 | 0.43 | 4625.33 ms | 1.41 % | 39.18

IOMeter - Sequential Write Pattern
IOMeter Tests | IO/sec | MB/sec | Response Time | CPU Util. | IO/CPU%
Load = Linear
SuperTrak-100 RAID 5 w/ 3 Drives | 27.46 | 6.87 | 36.40 ms | 1.15 % | 23.88
Escalade 6400 RAID 5 w/ 3 Drives | 14.04 | 3.51 | 71.19 ms | 3.93 % | 3.57
Load = Very Light
SuperTrak-100 RAID 5 w/ 3 Drives | 27.54 | 6.88 | 145.25 ms | 1.15 % | 23.95
Escalade 6400 RAID 5 w/ 3 Drives | 20.02 | 5.00 | 199.82 ms | 6.32 % | 3.17
Load = Light
SuperTrak-100 RAID 5 w/ 3 Drives | 27.53 | 6.88 | 581.07 ms | 1.19 % | 23.13
Escalade 6400 RAID 5 w/ 3 Drives | 18.96 | 4.74 | 843.63 ms | 98.19 % | 0.19
Load = Moderate
SuperTrak-100 RAID 5 w/ 3 Drives | 27.52 | 6.88 | 2324.42 ms | 1.72 % | 16.00
Escalade 6400 RAID 5 w/ 3 Drives | 19.10 | 4.77 | 3359.09 ms | 98.58 % | 0.19
Load = Heavy
SuperTrak-100 RAID 5 w/ 3 Drives | 26.99 | 6.75 | 8529.51 ms | 1.40 % | 19.28
Escalade 6400 RAID 5 w/ 3 Drives | 19.45 | 4.86 | 13313.50 ms | 99.27 % | 0.20

IOMeter - File Server Access Pattern
IOMeter Tests (IO/sec, MB/sec, Response Time, CPU Util., IO/CPU%)
Load = Linear
SuperTrak-100 RAID 5 w/ 4 Drives 61.04 0.65 16.38 ms 0.68 % 89.76
Escalade 6400 RAID 5 w/ 4 Drives 55.75 0.60 17.93 ms 0.73 % 76.37
Load = Very Light
SuperTrak-100 RAID 5 w/ 4 Drives 60.88 0.66 65.69 ms 0.65 % 93.66
Escalade 6400 RAID 5 w/ 4 Drives 115.98 1.25 34.48 ms 1.60 % 72.49
Load = Light
SuperTrak-100 RAID 5 w/ 4 Drives 64.79 0.71 246.83 ms 0.75 % 86.39
Escalade 6400 RAID 5 w/ 4 Drives 134.94 1.47 118.55 ms 1.72 % 78.45
Load = Moderate
SuperTrak-100 RAID 5 w/ 4 Drives 71.59 0.77 893.39 ms 0.87 % 82.29
Escalade 6400 RAID 5 w/ 4 Drives 141.66 1.53 451.68 ms 2.03 % 69.78
Load = Heavy
SuperTrak-100 RAID 5 w/ 4 Drives 79.73 0.86 3203.32 ms 1.02 % 78.17
Escalade 6400 RAID 5 w/ 4 Drives 175.44 1.89 1458.14 ms 2.85 % 61.56

IOMeter - Workstation Access Pattern
IOMeter Tests (IO/sec, MB/sec, Response Time, CPU Util., IO/CPU%)
Load = Linear
SuperTrak-100 RAID 5 w/ 4 Drives 69.85 0.55 14.31 ms 0.75 % 93.13
Escalade 6400 RAID 5 w/ 4 Drives 63.37 0.50 15.78 ms 0.77 % 82.30
Load = Very Light
SuperTrak-100 RAID 5 w/ 4 Drives 69.91 0.55 57.21 ms 0.79 % 88.49
Escalade 6400 RAID 5 w/ 4 Drives 127.75 1.00 31.31 ms 1.50 % 85.17
Load = Light
SuperTrak-100 RAID 5 w/ 4 Drives 73.47 0.57 217.70 ms 0.87 % 84.45
Escalade 6400 RAID 5 w/ 4 Drives 154.47 1.21 103.56 ms 1.97 % 78.41
Load = Moderate
SuperTrak-100 RAID 5 w/ 4 Drives 81.03 0.63 789.16 ms 0.93 % 87.13
Escalade 6400 RAID 5 w/ 4 Drives 161.40 1.26 396.77 ms 2.03 % 79.51
Load = Heavy
SuperTrak-100 RAID 5 w/ 4 Drives 89.69 0.70 2847.61 ms 1.19 % 75.37
Escalade 6400 RAID 5 w/ 4 Drives 196.46 1.53 1302.13 ms 3.26 % 60.26

IOMeter - Database Access Pattern
IOMeter Tests (IO/sec, MB/sec, Response Time, CPU Util., IO/CPU%)
Load = Linear
SuperTrak-100 RAID 5 w/ 4 Drives 64.01 0.50 15.62 ms 0.77 % 83.13
Escalade 6400 RAID 5 w/ 4 Drives 52.35 0.41 19.10 ms 0.77 % 67.99
Load = Very Light
SuperTrak-100 RAID 5 w/ 4 Drives 64.25 0.50 62.26 ms 0.81 % 79.32
Escalade 6400 RAID 5 w/ 4 Drives 104.08 0.81 38.43 ms 1.42 % 73.30
Load = Light
SuperTrak-100 RAID 5 w/ 4 Drives 67.24 0.53 237.91 ms 0.85 % 79.11
Escalade 6400 RAID 5 w/ 4 Drives 112.22 0.88 142.47 ms 1.50 % 74.81
Load = Moderate
SuperTrak-100 RAID 5 w/ 4 Drives 73.19 0.57 873.73 ms 0.82 % 89.26
Escalade 6400 RAID 5 w/ 4 Drives 116.31 0.91 550.18 ms 1.63 % 71.36
Load = Heavy
SuperTrak-100 RAID 5 w/ 4 Drives 81.01 0.63 3153.56 ms 1.03 % 78.65
Escalade 6400 RAID 5 w/ 4 Drives 142.01 1.11 1800.74 ms 2.61 % 54.41

IOMeter - Random Write Pattern
IOMeter Tests (IO/sec, MB/sec, Response Time, CPU Util., IO/CPU%)
Load = Linear
SuperTrak-100 RAID 5 w/ 4 Drives 65.58 0.51 15.24 ms 0.75 % 87.44
Escalade 6400 RAID 5 w/ 4 Drives 37.81 0.30 26.44 ms 0.57 % 66.33
Load = Very Light
SuperTrak-100 RAID 5 w/ 4 Drives 66.01 0.52 60.59 ms 0.75 % 88.01
Escalade 6400 RAID 5 w/ 4 Drives 53.79 0.42 74.35 ms 0.89 % 60.44
Load = Light
SuperTrak-100 RAID 5 w/ 4 Drives 72.48 0.57 220.65 ms 0.82 % 88.39
Escalade 6400 RAID 5 w/ 4 Drives 55.00 0.43 290.86 ms 1.06 % 51.89
Load = Moderate
SuperTrak-100 RAID 5 w/ 4 Drives 76.85 0.60 832.34 ms 0.89 % 86.35
Escalade 6400 RAID 5 w/ 4 Drives 57.61 0.45 1111.87 ms 1.16 % 49.66
Load = Heavy
SuperTrak-100 RAID 5 w/ 4 Drives 81.15 0.63 3146.87 ms 1.17 % 69.36
Escalade 6400 RAID 5 w/ 4 Drives 69.41 0.54 3682.43 ms 1.64 % 42.32

IOMeter - Sequential Write Pattern
IOMeter Tests (IO/sec, MB/sec, Response Time, CPU Util., IO/CPU%)
Load = Linear
SuperTrak-100 RAID 5 w/ 4 Drives 31.43 7.86 31.80 ms 1.26 % 24.94
Escalade 6400 RAID 5 w/ 4 Drives 12.48 3.12 80.11 ms 3.43 % 3.64
Load = Very Light
SuperTrak-100 RAID 5 w/ 4 Drives 31.55 7.89 126.77 ms 1.34 % 23.54
Escalade 6400 RAID 5 w/ 4 Drives 22.36 5.59 178.85 ms 7.06 % 3.17
Load = Light
SuperTrak-100 RAID 5 w/ 4 Drives 31.53 7.88 507.29 ms 1.34 % 23.53
Escalade 6400 RAID 5 w/ 4 Drives 21.51 5.38 743.58 ms 98.42 % 0.22
Load = Moderate
SuperTrak-100 RAID 5 w/ 4 Drives 31.55 7.89 2028.69 ms 1.99 % 15.85
Escalade 6400 RAID 5 w/ 4 Drives 21.58 5.39 2965.20 ms 98.75 % 0.22
Load = Heavy
SuperTrak-100 RAID 5 w/ 4 Drives 30.65 7.66 7545.48 ms 1.74 % 17.61
Escalade 6400 RAID 5 w/ 4 Drives 21.61 5.40 11725.58 ms 99.31 % 0.22

The SuperTrak's RAID 5 array performs significantly worse than a single drive. We're extremely disappointed with these numbers. It is fair to say at this point that the SuperTrak is not the card of choice for those with high performance in mind.

There's not much else to say on the matter - the SuperTrak's scores speak for themselves.

The Escalade fares better. Although the SuperTrak scores are better in the Workstation, File Server, and Database access patterns under Linear loads, the Escalade stakes out a commanding lead starting with Light loads and never looks back. The Escalade, on average, scores about 100% higher than the SuperTrak under Heavy loads.

The Escalade's write performance scores are nothing to write home about. Sequential writes clock in around 5 MB/sec, and random write performance is much worse than that of even a single drive. (The SuperTrak's RAID 5 write performance isn't anything to brag about either, but it remains better than the Escalade's nonetheless.)
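As a sanity check on the write tables, IOMeter's per-request transfer size can be backed out of the published IO/sec and MB/sec columns. This is our own arithmetic (neither vendor publishes these figures), and it assumes IOMeter's "MB" is binary (1 MB = 1024 KB):

```python
# Derive IOMeter's transfer size from the published columns.
def kb_per_io(ios_per_sec: float, mb_per_sec: float) -> float:
    """KB moved per I/O request, assuming binary megabytes."""
    return mb_per_sec / ios_per_sec * 1024

# SuperTrak-100 RAID 5 w/ 3 drives, Linear load:
seq = kb_per_io(27.46, 6.87)   # sequential write: ~256 KB per request
rnd = kb_per_io(73.31, 0.57)   # random write: ~8 KB per request
```

Large sequential transfers and small random ones are exactly the mix where a cacheless parity implementation suffers most.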

Although its RAID 5 scores are much better than the SuperTrak's, the Escalade's 4-drive RAID 5 array is barely faster than its own 2-drive RAID 0 array, and is actually slower in the Database, Sequential Write, and Random Write tests. While the "obvious" reason is the Escalade's lack of cache (which plays a big role in the performance of parity-based RAID levels), it's our opinion that this is a symptom of a larger issue: the Escalade was never meant for RAID 5 in the first place.

RAID 5 support for the Escalade likely came about for one reason: marketing. After the SuperTrak-100's release, 3ware found itself with no RAID 5 card to position against it. We believe 3ware enabled RAID 5 functionality on the Escalade 6400 and 6800 via firmware to solve this dilemma as quickly as possible. Though it's just a theory, the following points support it:

  • It's hard to believe that anyone would design a RAID 5 card without cache. Cache is crucial to the performance of RAID levels (such as 5) that use parity.
  • It's unlikely that 3ware would relegate RAID 5 to a firmware-level implementation if support had been planned all along. After all, the card implements RAID 0/1/10 at the hardware level; RAID 5 would presumably have received the same treatment had it been planned from the beginning.
  • If the Escalade had indeed been meant for RAID 5 all along, support should have arrived much sooner, not six months after the controller's introduction.
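The cache point deserves a concrete illustration. A RAID 5 small write is a read-modify-write cycle: the controller reads the old data and old parity, XORs both with the new data, and writes both back. A minimal sketch of the generic RAID 5 math (not either vendor's actual firmware):

```python
def update_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """new_parity = old_parity XOR old_data XOR new_data, byte by byte."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Hypothetical blocks for illustration:
old_data, new_data = b"\x0a\x0a", b"\x06\x06"
old_parity = b"\x0f\x0f"
new_parity = update_parity(old_data, new_data, old_parity)   # b"\x03\x03"

# Cost per small write: 2 reads + 2 writes at the disks. A write-back
# cache hides that latency and coalesces updates; without one, every
# write pays the full penalty.
```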

Considering the firmware implementation, it should be no surprise that Escalade RAID 5 performs much worse than RAID 0. According to a 3ware engineer, all XOR calculations are done by the firmware itself. While the very nature of "firmware RAID" means that some potential performance is lost relative to "true" hardware RAID, one should note that the Escalade's firmware RAID 5 solidly trounces the SuperTrak's hardware implementation. This goes to show that generalizations such as "hardware RAID is superior to firmware RAID" aren't always true - performance depends mostly on the cards themselves.
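For the curious, the XOR work the firmware performs is straightforward in principle. A sketch of parity generation and degraded-mode rebuild, using hypothetical blocks (this is generic RAID 5 logic, not 3ware's code):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together -- RAID 5's core parity operation."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical stripe on a 3-drive array: two data blocks plus parity.
d0, d1 = b"\x12\x34", b"\xab\xcd"
parity = xor_blocks([d0, d1])

# Should d1's drive fail, its data is recomputed from the survivors:
rebuilt = xor_blocks([d0, parity])   # equals d1
```

Performing this per-byte loop in firmware on every write, rather than in dedicated XOR hardware with cache behind it, is consistent with the RAID 5 numbers above.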

Confused? The StorageReview.com RAID Guide explains RAID 3!


The SuperTrak's RAID 3 Performance...

The SuperTrak's performance thus far has likely disappointed many readers. Unlike the Escalade, however, the Promise controller supports RAID levels 3 and 4. Let's see how the SuperTrak fares.

Note: Although none of Promise's documents mention RAID 4 support for the SuperTrak, it does indeed support this RAID level. The only difference between RAID 3 and RAID 4 is stripe size: RAID 3 uses very small stripe sizes such as 512 bytes or 1K, while the term RAID 4 generally implies a stripe size of over 1K. Since the SuperTrak allows stripe sizes up to 1MB, one can configure a "RAID 3" array on the SuperTrak with a stripe size over 1K. This, in effect, is a RAID 4 array.

In the results below, RAID 3 arrays used a 1K stripe size, while RAID 4 arrays used a 64K stripe.
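To make the stripe-size distinction concrete, here's a small calculation (our own illustration; the drive count excludes the parity drive) showing how many data drives a single request touches at each stripe size:

```python
def drives_touched(offset: int, length: int, stripe: int, data_drives: int) -> int:
    """Count distinct data drives an I/O request spans (parity drive ignored)."""
    first = offset // stripe
    last = (offset + length - 1) // stripe
    return min(last - first + 1, data_drives)

KB = 1024
# A hypothetical 8KB request on a 4-drive array (3 data drives + parity):
drives_touched(0, 8 * KB, 1 * KB, 3)    # 1K "RAID 3" stripe: hits all 3 data drives
drives_touched(0, 8 * KB, 64 * KB, 3)   # 64K "RAID 4" stripe: stays on one drive
```

With a 1K stripe, nearly every request is spread across the whole array, so the drives cannot service independent requests in parallel; a 64K stripe usually keeps small requests on a single drive.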

Ziff Davis WinBench 99 under Windows 2000 Professional using NTFS - RAID 3
Benchmark SuperTrak-100 w/ 3 Drives SuperTrak-100 w/ 4 Drives
Business Disk WinMark 99 (KB/sec) 3952 3870
High-End Disk WinMark 99 (KB/sec) 6118 6116
AVS/Express 3.4 (KB/sec) 11020 11500
FrontPage 98 (KB/sec) 51660 52200
MicroStation SE (KB/sec) 15080 15720
Photoshop 4.0 (KB/sec) 2462 2480
Premiere 4.2 (KB/sec) 3806 3850
Sound Forge 4.0 (KB/sec) 6122 6172
Visual C++ (KB/sec) 7394 7494
Disk/Read Transfer Rate
Beginning (KB/sec) 16333 16300
End (KB/sec) 16367 16300
Disk Access Time (ms) 16.12 16.12
Disk CPU Utilization (%) 2.88 2.85

While the Business Disk Winmark scores shown above may indeed represent the SuperTrak's RAID 3 performance, it's very difficult to believe that the High-End Winmark scores are accurate.

As we'll see, a comparison against baseline single-drive scores shows the RAID 3 Business Winmark trailing by about 20%. IOMeter backs these scores up.

The High-End Winmark scores, however, weigh in at less than half the baseline. This again raises a red flag, and further investigation reveals that the array's drive activity simply stops at various points during the High-End test for a minute or more. This behavior, while not as severe as it was with RAID 5, still invalidates the results.

Also disappointing is the SuperTrak's apparent sustained transfer rate of 16 MB/sec in RAID 3. The 20 MB/sec limit we observed in RAID 0/1/01 was bad enough. One uses RAID 3, after all, precisely when sustained transfer rates matter most. We have a hard time imagining any other situation where RAID 3 would be desirable - its small stripe size results in poor random I/O performance, since the vast majority of I/O requests must be serviced by multiple drives. (For a more detailed explanation of the various RAID levels and their respective performance, please see our RAID Guide.)

Confused? The StorageReview.com RAID Guide explains RAID 4!


The SuperTrak's RAID 4 Performance...

Ziff Davis WinBench 99 under Windows 2000 Professional using NTFS - RAID 4
Benchmark SuperTrak-100 w/ 3 Drives SuperTrak-100 w/ 4 Drives
Business Disk WinMark 99 (KB/sec) 4010 4048
High-End Disk WinMark 99 (KB/sec) 9468 9510
AVS/Express 3.4 (KB/sec) 11520 11640
FrontPage 98 (KB/sec) 55040 55580
MicroStation SE (KB/sec) 14880 14880
Photoshop 4.0 (KB/sec) 4180 4164
Premiere 4.2 (KB/sec) 6792 6906
Sound Forge 4.0 (KB/sec) 12340 12460
Visual C++ (KB/sec) 10104 10024
Disk/Read Transfer Rate
Beginning (KB/sec) 19767 19767
End (KB/sec) 19800 19800
Disk Access Time (ms) 16.22 16.18
Disk CPU Utilization (%) 2.89 2.94

Although RAID 4 yields better results, they're still quite disappointing. The High-End Disk Winmark issue mentioned above was present here as well, as evidenced by the low High-End scores. Thus, we cannot use the High-End scores to judge the SuperTrak's RAID 4 performance.

Once again, however, the Business Diskmark scores are likely representative - and they continue to disappoint. We expected much more out of this card.

WinBench Dilemma...

Considering the multitude of problems with the WinBench Disk Winmark scores presented here, one may wonder why we bothered publishing them at all. The fact is, we seriously considered omitting them. Doing so, however, likely would not have gone over well with most readers; as a result, we decided to publish the scores while explaining why we're certain they're unrepresentative. We then fell back on IOMeter to deliver an accurate performance measurement.

At this point, it's impossible to know what caused WinBench to act so strangely. It may be an issue with the cards, with WinBench, or perhaps both. WinBench may have serious issues benchmarking RAID arrays (especially ones with parity), but we've heard no complaints of such problems in the past. It may very well be these controllers; however, it seems unlikely that both cards would have such a similar issue with WinBench. Whatever the issue, keep in mind that these problems were repeatable on two completely different systems.

Confused? The StorageReview.com RAID Guide explains RAID 3!


The SuperTrak's RAID 3 and RAID 4 IOMeter performance...

Again, Promise assured us that our SuperTrak IOMeter results were representative of the card's performance. So with that in mind, let's take a look at the SuperTrak's RAID 3-4 IOMeter scores...

RAID 3...


Intel IOMeter - SuperTrak-100 RAID 3 w/ 3 Drives
1 worker, 10 minutes, 30 second rampup
Loads: Linear / Very Light / Light / Moderate / Heavy
File Server Access Pattern
Total I/Os per second: 50.83 / 51.49 / 57.00 / 61.50 / 63.70
Total MBs per second: 0.56 / 0.55 / 0.62 / 0.66 / 0.69
Average I/O Response Time (ms): 19.67 / 77.64 / 280.62 / 1039.51 / 4008.22
CPU Utilization (%): 0.47 / 0.57 / 0.66 / 0.73 / 0.95
I/Os per % CPU Utilization: 108.15 / 90.33 / 86.36 / 84.25 / 67.05
Workstation Access Pattern
Total I/Os per second: 54.66 / 55.94 / 62.46 / 66.39 / 71.00
Total MBs per second: 0.43 / 0.44 / 0.49 / 0.52 / 0.55
Average I/O Response Time (ms): 18.29 / 71.49 / 256.13 / 962.84 / 3599.32
CPU Utilization (%): 0.57 / 0.69 / 0.73 / 0.78 / 0.94
I/Os per % CPU Utilization: 95.89 / 81.07 / 85.56 / 85.12 / 75.53
Database Access Pattern
Total I/Os per second: 47.33 / 47.83 / 51.82 / 51.86 / 60.34
Total MBs per second: 0.37 / 0.37 / 0.40 / 0.43 / 0.47
Average I/O Response Time (ms): 21.12 / 83.60 / 308.57 / 1165.19 / 4228.65
CPU Utilization (%): 0.49 / 0.61 / 0.63 / 0.67 / 0.90
I/Os per % CPU Utilization: 96.59 / 78.41 / 82.25 / 77.40 / 67.04
Random Write Pattern
Total I/Os per second: 41.23 / 42.53 / 46.21 / 48.86 / 52.75
Total MBs per second: 0.32 / 0.33 / 0.36 / 0.38 / 0.41
Average I/O Response Time (ms): 24.25 / 94.04 / 346.15 / 1308.47 / 4838.19
CPU Utilization (%): 0.50 / 0.54 / 0.52 / 0.55 / 0.73
I/Os per % CPU Utilization: 82.46 / 78.76 / 88.87 / 88.84 / 72.26
Sequential Write Pattern
Total I/Os per second: 10.68 / 10.71 / 10.71 / 10.71 / 10.57
Total MBs per second: 2.67 / 2.68 / 2.68 / 2.68 / 2.64
Average I/O Response Time (ms): 93.58 / 373.48 / 1493.97 / 5977.96 / 21690.53
CPU Utilization (%): 0.46 / 0.43 / 0.48 / 0.58 / 0.56
I/Os per % CPU Utilization: 23.22 / 24.91 / 22.31 / 18.47 / 18.88

Intel IOMeter - SuperTrak-100 RAID 3 w/ 4 Drives
1 worker, 10 minutes, 30 second rampup
Loads: Linear / Very Light / Light / Moderate / Heavy
File Server Access Pattern
Total I/Os per second: 46.17 / 46.93 / 50.86 / 53.30 / 58.82
Total MBs per second: 0.50 / 0.50 / 0.55 / 0.57 / 0.64
Average I/O Response Time (ms): 21.66 / 86.22 / 314.44 / 1198.86 / 4340.76
CPU Utilization (%): 0.44 / 0.54 / 0.61 / 0.60 / 0.82
I/Os per % CPU Utilization: 104.93 / 86.91 / 83.38 / 88.83 / 71.73
Workstation Access Pattern
Total I/Os per second: 49.33 / 49.91 / 54.31 / 58.44 / 61.06
Total MBs per second: 0.39 / 0.39 / 0.42 / 0.46 / 0.48
Average I/O Response Time (ms): 20.27 / 80.13 / 294.48 / 1093.77 / 4177.49
CPU Utilization (%): 0.54 / 0.58 / 0.65 / 0.65 / 0.80
I/Os per % CPU Utilization: 91.35 / 86.05 / 83.55 / 89.91 / 76.33
Database Access Pattern
Total I/Os per second: 41.75 / 42.55 / 45.00 / 47.68 / 52.32
Total MBs per second: 0.33 / 0.33 / 0.35 / 0.37 / 0.41
Average I/O Response Time (ms): 23.95 / 93.97 / 355.41 / 1340.09 / 4871.62
CPU Utilization (%): 0.51 / 0.56 / 0.52 / 0.60 / 0.73
I/Os per % CPU Utilization: 81.86 / 75.98 / 86.54 / 79.47 / 71.67
Random Write Pattern
Total I/Os per second: 31.98 / 32.31 / 35.60 / 38.37 / 41.60
Total MBs per second: 0.25 / 0.25 / 0.28 / 0.30 / 0.32
Average I/O Response Time (ms): 31.26 / 123.75 / 449.27 / 1665.33 / 6128.76
CPU Utilization (%): 0.39 / 0.35 / 0.42 / 0.46 / 0.65
I/Os per % CPU Utilization: 82.00 / 92.31 / 84.76 / 83.41 / 64.00
Sequential Write Pattern
Total I/Os per second: 4.65 / 4.66 / 4.64 / 4.67 / 4.58
Total MBs per second: 1.16 / 1.16 / 1.16 / 1.17 / 1.14
Average I/O Response Time (ms): 214.94 / 858.29 / 3450.55 / 13705.42 / 49462.28
CPU Utilization (%): 0.23 / 0.17 / 0.26 / 0.29 / 0.29
I/Os per % CPU Utilization: 20.22 / 27.41 / 17.85 / 16.10 / 15.79

Although RAID 3 isn't intended for random I/O situations, the SuperTrak's RAID 3 performance is simply awful. These results are much, much worse than those of a single drive.

Confused? The StorageReview.com RAID Guide explains RAID 4!


RAID 4...


Intel IOMeter - SuperTrak-100 RAID 4 w/ 3 Drives
1 worker, 10 minutes, 30 second rampup
Loads: Linear / Very Light / Light / Moderate / Heavy
File Server Access Pattern
Total I/Os per second: 60.25 / 60.53 / 66.74 / 73.06 / 80.46
Total MBs per second: 0.64 / 0.65 / 0.73 / 0.79 / 0.87
Average I/O Response Time (ms): 16.59 / 66.08 / 239.72 / 875.43 / 3172.52
CPU Utilization (%): 0.55 / 0.68 / 0.78 / 0.81 / 1.14
I/Os per % CPU Utilization: 109.55 / 89.01 / 85.56 / 90.20 / 70.58
Workstation Access Pattern
Total I/Os per second: 69.22 / 69.49 / 76.12 / 83.34 / 91.64
Total MBs per second: 0.54 / 0.54 / 0.59 / 0.65 / 0.72
Average I/O Response Time (ms): 14.44 / 57.55 / 210.18 / 767.46 / 2788.70
CPU Utilization (%): 0.71 / 0.79 / 0.85 / 0.88 / 1.32
I/Os per % CPU Utilization: 97.49 / 87.96 / 89.55 / 94.70 / 69.42
Database Access Pattern
Total I/Os per second: 63.07 / 63.65 / 68.65 / 74.17 / 80.57
Total MBs per second: 0.49 / 0.50 / 0.54 / 0.58 / 0.63
Average I/O Response Time (ms): 15.85 / 62.84 / 232.99 / 862.25 / 3168.00
CPU Utilization (%): 0.67 / 0.70 / 0.79 / 0.91 / 1.16
I/Os per % CPU Utilization: 94.13 / 90.93 / 86.90 / 81.51 / 69.46
Random Write Pattern
Total I/Os per second: 64.62 / 65.37 / 71.65 / 76.32 / 81.08
Total MBs per second: 0.50 / 0.51 / 0.56 / 0.60 / 0.63
Average I/O Response Time (ms): 15.47 / 61.17 / 223.21 / 838.11 / 3150.54
CPU Utilization (%): 0.68 / 0.70 / 0.80 / 0.82 / 1.14
I/Os per % CPU Utilization: 95.03 / 93.39 / 89.56 / 93.07 / 71.12
Sequential Write Pattern
Total I/Os per second: 27.47 / 27.55 / 27.55 / 27.55 / 27.01
Total MBs per second: 6.87 / 6.89 / 6.89 / 6.89 / 6.75
Average I/O Response Time (ms): 36.40 / 145.17 / 580.73 / 2323.08 / 8538.92
CPU Utilization (%): 1.13 / 1.11 / 1.20 / 1.78 / 1.49
I/Os per % CPU Utilization: 24.31 / 24.82 / 22.96 / 15.48 / 18.13

Intel IOMeter - SuperTrak-100 RAID 4 w/ 4 Drives
1 worker, 10 minutes, 30 second rampup
Loads: Linear / Very Light / Light / Moderate / Heavy
File Server Access Pattern
Total I/Os per second: 59.43 / 59.68 / 64.73 / 71.63 / 80.27
Total MBs per second: 0.64 / 0.65 / 0.70 / 0.77 / 0.87
Average I/O Response Time (ms): 16.82 / 67.01 / 247.11 / 893.08 / 3181.62
CPU Utilization (%): 0.55 / 0.70 / 0.79 / 0.85 / 1.20
I/Os per % CPU Utilization: 108.05 / 85.26 / 81.94 / 84.27 / 66.89
Workstation Access Pattern
Total I/Os per second: 68.05 / 68.35 / 73.32 / 80.89 / 89.83
Total MBs per second: 0.53 / 0.53 / 0.57 / 0.63 / 0.70
Average I/O Response Time (ms): 14.69 / 58.51 / 218.14 / 790.54 / 2843.64
CPU Utilization (%): 0.69 / 0.77 / 0.89 / 0.88 / 1.18
I/Os per % CPU Utilization: 98.62 / 88.77 / 82.38 / 91.92 / 76.13
Database Access Pattern
Total I/Os per second: 61.92 / 61.87 / 66.29 / 72.16 / 80.09
Total MBs per second: 0.48 / 0.48 / 0.52 / 0.56 / 0.63
Average I/O Response Time (ms): 16.15 / 64.65 / 241.34 / 886.31 / 3189.88
CPU Utilization (%): 0.67 / 0.67 / 0.79 / 0.84 / 1.10
I/Os per % CPU Utilization: 92.42 / 92.34 / 83.91 / 85.90 / 72.81
Random Write Pattern
Total I/Os per second: 60.18 / 61.91 / 68.62 / 73.85 / 78.38
Total MBs per second: 0.47 / 0.48 / 0.54 / 0.58 / 0.61
Average I/O Response Time (ms): 16.61 / 64.61 / 233.07 / 866.13 / 3259.21
CPU Utilization (%): 0.58 / 0.69 / 0.73 / 0.89 / 1.05
I/Os per % CPU Utilization: 103.76 / 89.72 / 94.00 / 82.98 / 74.65
Sequential Write Pattern
Total I/Os per second: 31.45 / 31.55 / 31.56 / 31.56 / 30.67
Total MBs per second: 7.86 / 7.89 / 7.89 / 7.89 / 7.67
Average I/O Response Time (ms): 31.79 / 126.77 / 506.88 / 2027.80 / 7520.44
CPU Utilization (%): 1.26 / 1.23 / 1.31 / 1.69 / 1.53
I/Os per % CPU Utilization: 24.96 / 25.65 / 24.09 / 18.67 / 20.05

While the SuperTrak's RAID 4 scores improve over RAID 3, results remain significantly worse than those of a single drive. There's virtually no performance improvement when moving from three to four drives... something that should surprise no one by this point.

Confused? The StorageReview.com RAID Guide explains all!


Conclusion...

From a performance standpoint, the SuperTrak is a huge disappointment. Given both the card's price and extensive feature set, we expected much, much better performance. What we found, however, was performance significantly worse than that of a single drive in RAID levels 3-5. In RAID levels 0/1/01, scores improved, though not nearly as much as expected.

It goes without saying that the SuperTrak doesn't live up to a lot of people's expectations. We sought some comments from Promise on the issue. Billy Harrison, a Test Engineer at Promise, was kind enough to discuss the SuperTrak's performance as well as answer some questions about Promise's future ATA RAID cards.

When asked about the SuperTrak's lackluster performance, Billy pointed out that there is heavy overhead involved with the use of the I2O architecture, and that the 80960RD Processor on the card is "now slow in comparison to Intel's Latest and Greatest." He also said that Promise had compared the SuperTrak to an AMI MegaRAID SCSI RAID controller, and RAID 5 results for both cards were lower than that of a single drive. (Note, however, that we cannot verify these results.) He then added that the SuperTrak is not for those who are concerned with speed; the FastTrak would be a better choice in this case. According to Harrison, "The ST100 is positioned as a mass storage, fault-tolerant device, for someone more concerned about protecting their data from a hard disk failure than with blindingly fast speed."

That being the case, we wanted to know if Promise had a performance-oriented RAID 5 card on the horizon. As it turns out, they do: a "new and improved" version of the SuperTrak-100 called the SuperTrak-100SX6. Here's what Harrison says about it:

"The SuperTrak100SX6 is a 6 Channel 32 bit 33 MHZ RAID 5 Caching Controller that will feature Intel's 80960RM I/O Processor. The 80960RM boasts XOR Accelerator Software, 100 MHZ Core Speed, and 66 MHz Memory Speed. The use of the 80960 allows Promise to move the SuperTrak100 from 72 pin EDO Fast page memory to 168 pin SDRAM with ECC memory checking. The end result will be a new and improved SuperTrak100."

Also on the horizon is the FastTrak-100TX4:

"The FastTrak100 TX4 will be the worlds first 4 channel 32 bit 66 MHZ ATA RAID Controller. The Controller features 4 Master Channels that result in Instant Throughput from each and every drive in an Array. For instance, we have achieved 140 Megabyte Sequential Reads (64 kb data transfers) and 105 Megabyte Sequential Writes (64 kb data transfers) from a RAID 0 Array that has been configured with 4 IBM U100 (Telesto Series 15 gig platter) Drives running on a 66 MHZ PCI BUS using Intel's IOMeter 1999.10.20".

Expect the FastTrak-100TX4 in late March or April, with the SuperTrak-100SX6 following sometime in late Q2. We hope to evaluate both of these cards and look forward to any performance increases that they may bring.

On the other end of the spectrum, the Escalade 6400's performance is very impressive. In addition to delivering a staggering 103 MB/sec of sequential throughput, the Escalade features nicely scaling random I/O performance. TwinStor, in addition, delivered notable performance increases from mirrored arrays. Overall, we're quite impressed with the 6400.

The Escalade's performance should put to rest the false impression many folks carry about ATA RAID: that it provides no real-world performance increase.

Thus ends StorageReview.com's first ATA RAID review in over two years. We're aware that these reviews are long overdue and appreciate everyone's patience. There's much more coming soon, both ATA and SCSI. Stay tuned...



Copyright © 1998-2005 StorageReview.com, Inc. All rights reserved.