January 3rd, 2019 by Marshall Gunnell
Tyan Transport SX TN70A-B8026 Review
The TYAN Transport SX TN70A-B8026 is a 2U rackmount server based on the Tomcat SX single-socket AMD EPYC motherboard. It’s designed for SMEs and is ideal for real-time analytics, video streaming, software-defined storage, in-memory databases, and big data applications.
On the hardware side, the Transport SX comes with 24 hot-swappable 2.5” NVMe drive bays and two internal 2.5” SATA drive bays. TYAN also touts outstanding performance for a single CPU socket, thanks to the AMD EPYC processor. The 14nm CPU lineup spans 8-, 16-, 24-, and 32-core models with up to 64 threads, supports up to 2TB of RAM per socket on all CPU models at memory speeds up to DDR4-2667, and provides 128 PCIe lanes. The Transport also features a LAN mezzanine slot that supports network speeds of up to 100GbE. Should additional expansion be required, an HHHL PCIe x8 card is also supported.
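Those 128 PCIe lanes are what make a 24-bay all-NVMe design practical on a single socket. As a rough illustration of the lane budget (per-device widths are standard PCIe allocations; the exact routing on the TN70A-B8026 is an assumption for illustration, not a TYAN-published figure):

```python
# Rough PCIe Gen3 lane budget for a single-socket EPYC NVMe server.
# Per-device lane widths are standard PCIe allocations; the exact
# routing on the TN70A-B8026 is assumed here for illustration.
LANES_AVAILABLE = 128  # AMD EPYC 7000 series, single socket

devices = {
    "24x 2.5in NVMe bays (x4 each)": 24 * 4,  # 96 lanes just for storage
    "OCP 2.0 mezzanine slot (x16)": 16,
    "HHHL expansion slot (x8)": 8,
}

used = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name}: {lanes} lanes")
print(f"Total used: {used} of {LANES_AVAILABLE} ({LANES_AVAILABLE - used} spare)")
```

A dual-socket Xeon platform of the same era would need PCIe switches to feed this many NVMe bays; here the storage alone consumes 96 lanes and still leaves room for networking.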
TYAN Transport SX TN70A-B8026 Specifications
|Form Factor||2U Rackmount|
|Q’ty / Socket Type||(1) AMD Socket SP3|
|Supported CPU Series||(1) AMD EPYC 7000 Series Processor|
|Average CPU Power (ACP) Wattage||Max up to 180W|
|Supported DIMM Qty||(16) DIMM Slots|
|DIMM Type / Speed||DDR4 ECC RDIMM/LRDIMM/NVDIMM 2667|
|Capacity||Up to 1,024GB RDIMM/LRDIMM|
|Memory Channel||8 Channels per CPU|
|External Drive Bay||Q’ty / Type: (24) 2.5” Hot-Swap NVMe
HDD Backplane Support: SAS 12Gb/s /SATA 6Gb/s / NVMe
Supported HDD Interface: (24) 2.5” NVMe|
|Internal Drive Bay||Type / Q’ty: (2) 2.5” fixed HDD/SSDs
Supported HDD Interface: (2) SATA 6Gb/s|
|I/O Ports||USB: (3) USB3.0 ports (2 at rear, 1 TYPE-A) / (2) USB2.0 ports (2 at front)
COM: (1) DB-9 port (COM1) + (1) header (COM2)
VGA: (1) D-Sub 15-pin port
RJ-45: (2) GbE ports, (1) GbE dedicated for IPMI|
|Graphic||Connector type: D-Sub 15-pin
Resolution: Up to 1920x1200
Chipset: Aspeed AST2500|
|BIOS||Brand / ROM size: AMI / 32MB
Feature: Hardware Monitor / Boot from USB device / PXE via LAN / Storage / User-Configurable Fan PWM Duty Cycle / Console Redirection / ACPI 6.1 / SMBIOS 3.1 / PnP / Wake on LAN / ACPI sleeping state S5|
|PCI-E||(1) PCI-E Gen3 x8 slot (HH / HL w/ tall bracket)|
|Pre-install TYAN Riser Card||(1) M7106-L24-3F riser card for (1) PCI-E Gen3 x16 slot + (2) PCI-E Gen3 x8 slots / (1) M7106-R24-3F riser card for (1) PCI-E Gen3 x16 slot + (2) PCI-E Gen3 x8 slots|
|Pre-install TYAN Mezz Card||(2) M2093 storage mezz. cards w/ (4) PCI-E x8 SFF-8611 OCuLink connectors for (8) NVMe ports|
|Others||(1) PCI-E Gen3 x16 OCP 2.0 slot (conn.A+conn.B)|
|Onboard Chipset||Onboard Aspeed AST2500|
|AST2500 iKVM Feature||24-bit high quality video compression / Supports storage over IP and remote platform-flash / USB 2.0 virtual hub|
|AST2500 IPMI Feature||IPMI 2.0 compliant baseboard management controller (BMC) / 10/100/1000 Mb/s MAC interface|
|Power Supply||Type: RPSU
Input Range: AC 100-127V/10A / AC 200-240V/5A
Frequency: 50-60 Hz
Output Watts: 770 Watts
Efficiency: 80 PLUS Platinum|
|Fan||(8) 6cm fans|
|Operating Temp.||10°C ~ 35°C (50°F ~ 95°F)|
|Non-operating Temp.||-40°C ~ 70°C (-40°F ~ 158°F)|
|Operating/Non-operating Humidity||90%, non-condensing at 35°C|
|RoHS 6/6 Compliant||Yes|
|Dimension (D x W x H)||27.56” x 17.72” x 3.43” (700 x 450 x 87mm)|
|Gross Weight||30 kg (66 lbs)|
|Net Weight||19 kg (42 lbs)|
Design and Build
The TYAN Transport SX TN70A-B8026 is a 2U form factor rackmount server with 24 NVMe bays running along the front of the unit. On the left-hand side are two USB 2.0 ports and, on the right, you’ll find the power, ID, and reset buttons, along with ID and IPMI indicator lights.
On the back of the unit on the left-hand side, you’ll find the dual power supply outlets. Next to those are two USB 3.0 ports, a dedicated IPMI LAN port, VGA and serial ports, and two 1GbE networking ports. With all but one PCIe slot consumed by PCIe breakout cards serving the front-mount NVMe bays, this server isn’t designed for a ton of external connectivity. In this case, we populated the single slot with a dual-port 10G SFP+ networking card.
Removing the top panel provides access to the two internal 2.5” drive bays, which are typically used for boot duties. You’ll also find the server board, RAM, and eight 6cm fans.
SQL Server Performance
StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.
Each SQL Server VM is configured with two vDisks: a 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM and leveraged the LSI Logic SAS SCSI controller. While our Sysbench workloads tested previously saturated the platform in both storage I/O and capacity, the SQL test looks for latency performance.
This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Dell's Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across our servers.
SQL Server Testing Configuration (per VM)
- Windows Server 2012 R2
- Storage Footprint: 600GB allocated, 500GB used
- SQL Server 2014
- Database Size: 1,500 scale
- Virtual Client Load: 15,000
- RAM Buffer: 48GB
- Test Length: 3 hours
- 2.5 hours preconditioning
- 30 minutes sample period
For our transactional SQL Server benchmark, the TYAN Transport SX was able to hit an aggregate score of 12,477.5 TPS with individual VMs running from 3,090.8 TPS to 3,152.6 TPS.
A more telling sign of SQL Server performance is latency. With SQL Server average latency, the Transport SX hit an aggregate score of 65.5ms with individual VMs running anywhere from 14ms to 110ms.
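For context on how these numbers are reported, the aggregate TPS is the sum of the four VMs' individual results, while the aggregate latency is their mean. A quick sketch of that arithmetic (the per-VM values below are hypothetical, chosen to fall within the ranges reported above; they are not the exact per-VM figures from this run):

```python
# Aggregate SQL Server results across VMs: TPS sums, latency averages.
# Per-VM numbers are hypothetical examples within the reported ranges,
# not the actual per-VM figures from this test.
vm_tps = [3090.8, 3120.0, 3114.1, 3152.6]
vm_latency_ms = [14.0, 52.0, 86.0, 110.0]

aggregate_tps = sum(vm_tps)
average_latency = sum(vm_latency_ms) / len(vm_latency_ms)

print(f"Aggregate: {aggregate_tps:.1f} TPS, {average_latency:.1f} ms avg latency")
```

Note how a single slow VM (110ms here) can pull the average latency well above the best VM's 14ms, which is why we report the per-VM spread alongside the aggregate.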
Sysbench MySQL Performance
Our first local-storage application benchmark consists of a Percona MySQL OLTP database measured via SysBench. This test measures average TPS (Transactions Per Second), average latency, and average 99th percentile latency as well.
Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM and leveraged the LSI Logic SAS SCSI controller.
Sysbench Testing Configuration (per VM)
- CentOS 6.3 64-bit
- Percona XtraDB 5.5.30-rel30.1
- Database Tables: 100
- Database Size: 10,000,000
- Database Threads: 32
- RAM Buffer: 24GB
- Test Length: 3 hours
- 2 hours preconditioning 32 threads
- 1 hour 32 threads
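The configuration above corresponds roughly to a sysbench OLTP invocation like the following sketch. This uses sysbench 1.x lua-workload syntax; the Percona/sysbench builds used in this review predate it and used older `--test=oltp`-style flags, and the host and user shown are placeholders, not our lab configuration.

```python
# Sketch: build a sysbench OLTP run command matching the per-VM
# configuration above (100 tables, 10M rows, 32 threads).
# Uses sysbench 1.x syntax; host/user are placeholders.
import shlex

config = {
    "tables": 100,
    "table-size": 10_000_000,
    "threads": 32,
    "time": 3600,              # the 1-hour measured run after preconditioning
    "mysql-host": "vm-under-test",
    "mysql-user": "sbtest",
}

cmd = ["sysbench", "oltp_read_write"]
cmd += [f"--{key}={value}" for key, value in config.items()]
cmd.append("run")

print(shlex.join(cmd))
```

In practice a matching `prepare` pass builds the 100 tables first, and the 2-hour preconditioning step is simply a longer `run` whose results are discarded.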
With Sysbench OLTP, we look at the four-VM configuration for each server. The Transport SX had an aggregate score of 5,778.42 TPS with individual VMs ranging from 1,331.56 TPS to 1,556.22 TPS.
For Sysbench average latency, the Transport SX had an aggregate score of 22.215ms with individual VMs ranging from 20.56ms to 24.03ms.
When it comes to worst-case scenario (99th percentile) latency, the TYAN had an aggregate score of 55.74ms with individual VMs running from 49.91ms to 59.26ms.
VDBench Workload Analysis
When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparison between competing solutions. These workloads offer a range of different testing profiles ranging from "four corners" tests, common database transfer size tests, as well as trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices.
- 4K Random Read: 100% Read, 128 threads, 0-120% iorate
- 4K Random Write: 100% Write, 64 threads, 0-120% iorate
- 64K Sequential Read: 100% Read, 16 threads, 0-120% iorate
- 64K Sequential Write: 100% Write, 8 threads, 0-120% iorate
- Synthetic Database: SQL and Oracle
- VDI Full Clone and Linked Clone Traces
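Each of the "four corners" profiles above maps to a vdbench workload definition (`sd`/`wd`/`rd` entries in a parameter file). A minimal sketch that emits such a file is below; the device path and elapsed time are placeholders, and the actual StorageReview scripting is considerably more elaborate.

```python
# Sketch: emit a minimal vdbench parameter file for the four-corners
# profiles above. /dev/nvme0n1 and elapsed=300 are placeholders,
# not the actual test-rig configuration.
corners = [
    # (name, xfersize, read%, threads) — matching the list above
    ("4k_rand_read",  "4k",  100, 128),
    ("4k_rand_write", "4k",    0,  64),
    ("64k_seq_read",  "64k", 100,  16),
    ("64k_seq_write", "64k",   0,   8),
]

lines = ["sd=sd1,lun=/dev/nvme0n1,openflags=o_direct"]
for name, xfer, rdpct, threads in corners:
    # seekpct=100 makes the workload fully random; 0 keeps it sequential
    seek = "seekpct=100" if xfer == "4k" else "seekpct=0"
    lines.append(f"wd={name},sd=sd1,xfersize={xfer},rdpct={rdpct},{seek}")
    lines.append(f"rd=run_{name},wd={name},iorate=max,elapsed=300,threads={threads}")

print("\n".join(lines))
```

The 0-120% iorate sweep in the real tests is achieved by first finding max IOPS with `iorate=max`, then stepping `iorate` through fixed fractions of that ceiling to produce the latency-vs-IOPS curves shown below.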
With 4K random read, the Transport SX started off with a latency of 114.5μs at 514,417.17 IOPS, staying under 150μs until about 2,057,000 IOPS, and went on to peak at 3,791,190 IOPS with a latency of 196.9μs.
For 4K random write, the Transport SX started off strong (compared to 4K read) with a latency of 40.6μs at 204,782 IOPS, though latency quickly rose toward 120μs by 1.2 million IOPS. It then went on to peak at 2,097,767 IOPS with a latency of 113.6μs.
Next, we look at sequential workloads with 64K. For read, the Transport SX started off at 3,231MB/s with a latency of 224.1μs, and saw a steady and constant increase until peaking at 32,046MB/s with a latency of 366.2μs.
For 64K sequential write, the Transport SX started off at 1,867MB/s and ran the 80μs line until nearly the end of the test when it finally broke 90μs at 18,645MB/s. It saw a very sharp spike at the end, peaking at 18,698MB/s at a 178.1μs latency level.
Next up are our SQL workloads, where the Transport SX started off at 250K IOPS with a latency of 122μs and saw a slow and steady increase throughout, peaking at 2,448,813 IOPS with a 151.8μs latency.
For SQL 90-10, the Transport SX started off at 180K IOPS with a latency of 117.1μs and slowly climbed to 1.96 million IOPS at a 162.8μs latency. Here, it took a sharp turn back, finishing at 1,695,111 IOPS at a 169.2μs latency.
SQL 80-20 saw the Transport SX start at 161,214 IOPS at a 110.4μs latency and peak at 1,268,447 IOPS with a 180.7μs latency.
Following our SQL workloads are our Oracle workloads. Here, the Transport SX started off at 110,863 IOPS with a latency of 111.3μs and climbed steadily until around 1 million IOPS, peaking at 1,052,446 IOPS with a latency of 169μs.
Oracle 90-10 saw the Transport SX kick off the test with 181,197 IOPS at a 117.1μs latency, slowly climbing to its peak with 1,789,282 IOPS at a latency of 142.8μs.
With Oracle 80-20, the Transport SX told a similar story to the Oracle 90-10 test, kicking off with 175,337 IOPS at a latency of 110μs and peaking at 1,700,667 IOPS with a 147.7μs latency.
Next, we look at our VDI clone tests, Full and Linked. Our Full Clone tests include Boot, Initial Login, Monday Login, Patch Update, and Tuesday Steady, while our Linked Clone tests include Boot, Initial Login, Monday Login, and Tuesday Login. First, we’ll be looking at our Full Clone tests.
For the VDI Boot, the Transport SX started off with 142,582 IOPS at a latency of 127.9μs. The slow-and-constant trend continued with this test, with it finishing at 1,384,133 IOPS at a 208.3μs latency.
VDI FC Initial Login saw the Transport SX kick off with 67,581 IOPS at a latency of 98.4μs. Once it hit around 472K IOPS, there was a steep spike in latency, rising almost 95μs over the next 50K IOPS. It peaked at roughly 588K IOPS with a latency of 253.9μs.
For VDI Monday Login, the Transport SX started off with 58,894 IOPS at a latency of 115.2μs and rose steadily throughout the test. At the end of the test, we see it kick backwards and forwards a bit, ending with roughly 600K IOPS at a 265.4μs latency.
Moving on to VDI Linked Clone (LC), the boot test had the Transport SX begin with 92,621 IOPS at a 150.4μs latency, rising a bit over 150μs throughout the test, finishing at a 201.9μs latency with 925,069 IOPS.
VDI LC Initial Login saw the Transport SX start off with 40,477 IOPS at a 120.3μs latency, with a straight increase and a small hook at the finish, showing roughly 400K IOPS and a 229.5μs latency.
VDI LC Monday Login had the Transport SX at 51,113 IOPS and a latency of 132μs at kick-off, with another straight increase towards the finish, peaking at roughly 540K IOPS at a 326.6μs latency level.
Conclusion
The TYAN Transport SX TN70A-B8026 is a 2U server that comes with 24 2.5” NVMe drive bays, plus two additional 2.5” SATA bays inside the unit. This TYAN server supports the AMD EPYC 7000 series processors and boasts an extremely high memory capacity (for a single-socket server) of up to 2TB of RAM with every processor in the AMD EPYC SKU stack.
In our Application Workload Analysis, the Transport SX hit an aggregate score of 12,477.5 TPS with an average latency of 65.5ms in SQL Server. In Sysbench, the Transport SX showed an average transactional performance of 5,778.42 TPS and an average latency of 22.215ms with four VMs. Finally, our Sysbench worst-case scenario latency showed 55.7ms with four VMs.
Our VDBench workloads showed impressive performance from the Transport SX. The server hit 3.7M IOPS in random 4K read, 2.1M IOPS in random 4K write, 32GB/s in 64K sequential read, and 18.6GB/s in 64K sequential write. For our SQL test, the Transport SX hit 2.44M IOPS, with 1.69M IOPS in 90-10 and 1.26M IOPS in 80-20. Oracle tests also showed nice performance with 1.05M IOPS, 1.78M IOPS in 90-10, and 1.7M IOPS in 80-20. The Transport also had nice VDI clone boot numbers with 1.38M IOPS in Full and 925K IOPS in Linked.
The system is well put together and gives those in the AMD universe an interesting chassis that can house 24 NVMe drives. For workloads that need a lot of bandwidth, this configuration may be compelling, especially given the savings the single processor can offer in build and software licensing costs. The tradeoff, of course, is that with only one x8 PCIe slot available, external connectivity is limited to roughly 6.4GB/s. Regardless, the Transport SX is without a doubt an interesting platform with plenty of potential in the right use case.