
GIGABYTE R281-NO0 NVMe Server Review

by Adam Armstrong

The GIGABYTE R281-NO0 is a 2U all-NVMe server built around Intel’s second generation of Xeon Scalable processors with a focus on performance-intensive workloads. With support for 2nd gen Intel Xeon Scalable comes support for Intel Optane DC Persistent Memory modules. Optane PMEM can bring a much larger memory footprint: while the modules aren’t as fast as DRAM, they come in much higher capacities. Leveraging Optane can help unleash the full potential of the 2nd gen Intel Xeon Scalable processors in the GIGABYTE R281-NO0.

Other notable hardware features of the GIGABYTE R281-NO0 include 12 DIMM slots per socket, or 24 in total. The newer CPUs support DRAM speeds up to 2933MHz, and in total users can outfit the GIGABYTE R281-NO0 with up to 3TB of DRAM. The server can leverage several different riser cards, giving it up to six full-height, half-length slots for PCIe x16 (and narrower) devices. The company touts a very dense add-on slot design with several configurations for different use cases. The server also has a modularized backplane that supports exchangeable expanders, offering SAS, NVMe U.2, or a combination of both depending on needs.

On the storage side, users can add not just a lot of capacity but a lot of NVMe storage, in both U.2 and add-in-card (AIC) form factors. Across the front of the server are 24 drive bays that accept 2.5” HDDs or SSDs, including NVMe drives. The rear of the server has two more 2.5” drive bays for SATA/SAS boot or logging drives. And there are plenty of PCIe expansion slots for various PCIe devices, including more storage. This density and performance make the platform a fit for AI and HPC systems optimized for GPU density, multi-node servers optimized for HCI, and storage servers optimized for HDD/SSD capacity.

For power management, the GIGABYTE R281-NO0 has two PSUs, which is not uncommon at all. However, it also has intelligent power management features that both make the server more efficient in terms of power usage and keep it running in the case of a failure. The server comes with a feature known as Cold Redundancy, which switches the extra PSU to standby mode when the system load is under 40%, saving power costs. The system also has SCMP (Smart Crisis Management / Protection): with SCMP, if there is an issue with one PSU, the system drops into a lower power mode while the PSU is repaired or replaced.
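
To make the Cold Redundancy behavior concrete, below is a minimal, conceptual sketch of the decision logic in Python. The 40% threshold comes from GIGABYTE's description above; the helper functions, the 5% hysteresis margin, and the PSU numbering are illustrative assumptions, not a real GIGABYTE or BMC API.

    # Conceptual sketch of Cold Redundancy: park the spare PSU in standby under
    # light load, wake it when load rises. BMC helpers are simulated stand-ins.

    LOW_LOAD_THRESHOLD = 40   # percent; GIGABYTE cites standby below 40% system load
    HYSTERESIS = 5            # assumed margin to avoid flapping between states

    def read_system_load_percent() -> float:
        """Stand-in for querying system load from the BMC (e.g., IPMI/Redfish)."""
        return 27.5  # simulated light load

    def set_psu_standby(psu_id: int, standby: bool) -> None:
        """Stand-in for commanding a PSU into or out of standby via the BMC."""
        print(f"PSU {psu_id} -> {'standby' if standby else 'active'}")

    def cold_redundancy_step(psu_in_standby: bool) -> bool:
        """One polling pass: returns the new standby state for the spare PSU."""
        load = read_system_load_percent()
        if not psu_in_standby and load < LOW_LOAD_THRESHOLD:
            set_psu_standby(psu_id=2, standby=True)    # one PSU can carry the load
            return True
        if psu_in_standby and load >= LOW_LOAD_THRESHOLD + HYSTERESIS:
            set_psu_standby(psu_id=2, standby=False)   # restore full redundancy
            return False
        return psu_in_standby

    print(cold_redundancy_step(psu_in_standby=False))  # light load -> PSU 2 to standby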

GIGABYTE R281-NO0 Specifications

Form Factor 2U
Motherboard MR91-FS0
CPU 2nd Generation Intel Xeon Scalable and Intel Xeon Scalable Processors
Intel Xeon Platinum Processor, Intel Xeon Gold Processor, Intel Xeon Silver Processor and Intel Xeon Bronze Processor
CPU TDP up to 205W
Socket 2x LGA 3647, Socket P
Chipset Intel C621
Memory 24 x DIMM slots
RDIMM modules up to 64GB supported
LRDIMM modules up to 128GB supported
Supports Intel Optane DC Persistent Memory (DCPMM)
1.2V modules: 2933 (1DPC)/2666/2400/2133 MHz
Storage
Bays Front side: 24 x 2.5″ U.2 hot-swappable NVMe SSD bays
Rear side: 2 x 2.5″ SATA/SAS hot-swappable HDD/SSD bays
Drive Type SATA III 6Gb/s
SAS with an add-on SAS card
RAID For SATA drives: Intel SATA RAID 0/1
For U.2 drives: Intel Virtual RAID On CPU (VROC) RAID 0, 1, 10, 5
LAN 2 x 1Gb/s LAN ports (Intel I350-AM2)
1 x 10/100/1000 management LAN
Expansion Slots
Riser Card CRS2131 1 x PCIe x16 slot (Gen3 x16 or x8), Full height half-length
1 x PCIe x8 slot (Gen3 x0 or x8), Full height half-length
1 x PCIe x8 slot (Gen3 x8), Full height half-length
Riser Card CRS2132 1 x PCIe x16 slot (Gen3 x16 or x8), Full height half-length, Occupied by CNV3124, 4 x U.2 ports
1 x PCIe x8 slot (Gen3 x0 or x8), Full height half-length
1 x PCIe x8 slot (Gen3 x8), Full height half-length
Riser Card CRS2124 1 x PCIe x8 slot (Gen3 x0), Low profile half-length
1 x PCIe x16 slot (Gen3 x16), Low profile half-length, Occupied by CNV3124, 4 x U.2 ports
2 x OCP mezzanine slots PCIe Gen3 x16
Type1, P1, P2, P3, P4, K2, K3
1 x OCP mezzanine slot occupied by CNVO124, 4 x U.2 mezzanine card
I/O
Internal 2 x Power supply connectors
4 x SlimSAS connectors
2 x SATA 7-pin connectors
2 x CPU fan headers
1 x USB 3.0 header
1 x TPM header
1 x VROC connector
1 x Front panel header
1 x HDD back plane board header
1 x IPMB connector
1 x Clear CMOS jumper
1 x BIOS recovery jumper
Front 2 x USB 3.0
1 x Power button with LED
1 x ID button with LED
1 x Reset button
1 x NMI button
1 x System status LED
1 x HDD activity LED
2 x LAN activity LEDs
Rear 2 x USB 3.0
1 x VGA
1 x COM (RJ45 type)
2 x RJ45
1 x MLAN
1 x ID button with LED
Backplane Front side (CBP20O2): 24 x SATA/SAS ports
Front side (CEPM480): 8 x U.2 ports
Rear side (CBP2020): 2 x SATA/SAS ports
Bandwidth: SATA III 6Gb/s or SAS 12Gb/s per port
Power
Supply 2 x 1600W redundant PSUs
80 PLUS Platinum
AC Input 100-127V~/ 12A, 47-63Hz
200-240V~/ 9.48A, 47-63Hz
DC Output Max 1000W/ 100-127V

  • +12V/ 82A
  • +12Vsb/ 2.1A

Max 1600W/ 200-240V

  • +12V/ 132A
  • +12Vsb/ 2.1A
Environmental
Operating temperature 10°C to 35°C
Operating humidity 8-80% (non-condensing)
Non-operating temperature -40°C to 60°C
Non-operating humidity 20%-95% (non-condensing)
Physical
Dimensions (WxHxD) 438 x 87.5 x 730 mm
Weight  20kg
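
The spec table lists Intel VROC (Virtual RAID on CPU) for the U.2 bays. On Linux, VROC volumes are typically managed through mdadm with Intel IMSM metadata; the following is a minimal sketch of that flow, driven from Python purely for illustration. The device paths and the four-drive RAID 5 layout are assumptions for the example, not the configuration tested in this review.

    # Sketch: building an Intel VROC (IMSM) RAID 5 volume from four NVMe drives
    # with mdadm on Linux. Device paths and volume names are assumptions.
    import subprocess

    drives = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Create an IMSM container that groups the member drives.
    run(["mdadm", "--create", "/dev/md/imsm0", "--metadata=imsm",
         "--raid-devices", str(len(drives))] + drives)

    # 2. Create a RAID 5 volume inside the container (VROC supports 0/1/10/5 on U.2).
    run(["mdadm", "--create", "/dev/md/vroc_r5", "--level=5",
         "--raid-devices", str(len(drives)), "/dev/md/imsm0"])

    # 3. Confirm the array assembled as expected.
    run(["cat", "/proc/mdstat"])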

Design and Build

The GIGABYTE R281-NO0 is a 2U rackmount server. Across the front are 24 hot-swappable bays for NVMe U.2 SSDs. On the left side are LED indicators and buttons for reset, power, NMI, and ID. On the right are two USB 3.0 ports.

 

Flipping the device around to the rear, we see two 2.5″ SSD/HDD bays in the upper-left corner. Beneath the bays are the two PSUs. Running across the bottom are a VGA port, two USB 3.0 ports, two GbE LAN ports, a serial port, and a 10/100/1000 server management LAN port. Above the ports are six PCIe slots.

 

The top pops off fairly easily, giving users access to the two Intel CPUs (covered by heatsinks in the photo). Here one can see all the DIMM slots as well. This server is loaded with NVMe, as seen by all the direct-attach cables running from the front backplane back to the daughterboards. The cables themselves are neatly laid out and don’t appear to impact front-to-back airflow.

GIGABYTE R281-NO0 Configuration

CPU 2 x Intel Xeon Platinum 8280
RAM 384GB of 2933MHz DRAM
Storage 12 x Micron 9300 NVMe 3.84TB

Performance

SQL Server Performance

StorageReview’s Microsoft SQL Server OLTP testing protocol employs the current draft of the Transaction Processing Performance Council’s Benchmark C (TPC-C), an online transaction processing benchmark that simulates the activities found in complex application environments. The TPC-C benchmark comes closer than synthetic performance benchmarks to gauging the performance strengths and bottlenecks of storage infrastructure in database environments.

Each SQL Server VM is configured with two vDisks: a 100GB volume for boot and a 500GB volume for the database and log files. From a system resource perspective, we configured each VM with 16 vCPUs, 64GB of DRAM, and leveraged the LSI Logic SAS SCSI controller. While our Sysbench workloads previously saturated the platform in both storage I/O and capacity, the SQL test looks for latency performance.

This test uses SQL Server 2014 running on Windows Server 2012 R2 guest VMs, and is stressed by Dell’s Benchmark Factory for Databases. While our traditional usage of this benchmark has been to test large 3,000-scale databases on local or shared storage, in this iteration we focus on spreading out four 1,500-scale databases evenly across our servers.

SQL Server Testing Configuration (per VM)

  • Windows Server 2012 R2
  • Storage Footprint: 600GB allocated, 500GB used
  • SQL Server 2014
    • Database Size: 1,500 scale
    • Virtual Client Load: 15,000
    • RAM Buffer: 48GB
  • Test Length: 3 hours
    • 2.5 hours preconditioning
    • 30 minutes sample period

For our transactional SQL Server benchmark, the R281-NO0 posted an aggregate score of 12,645 TPS, with individual VMs ranging from 3,161.1 TPS to 3,161.5 TPS.
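
As a quick sanity check on how the aggregate figure relates to the per-VM results, the aggregate is simply the sum of the four VMs' transaction rates. The two endpoint values come from the result above; the two middle values are assumed for illustration only.

    # Aggregate TPS is the sum of the four per-VM results.
    per_vm_tps = [3161.1, 3161.2, 3161.2, 3161.5]  # middle two values assumed
    aggregate = sum(per_vm_tps)
    print(f"aggregate ≈ {aggregate:,.1f} TPS")      # ≈ 12,645 TPS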

For SQL Server average latency, the server posted 1ms both in aggregate and for each individual VM.

Sysbench MySQL Performance

Our first local-storage application benchmark consists of a Percona MySQL OLTP database measured via SysBench. This test measures average TPS (Transactions Per Second), average latency, and average 99th percentile latency as well.

Each Sysbench VM is configured with three vDisks: one for boot (~92GB), one with the pre-built database (~447GB), and the third for the database under test (270GB). From a system resource perspective, we configured each VM with 16 vCPUs, 60GB of DRAM, and leveraged the LSI Logic SAS SCSI controller; a rough equivalent of the sysbench invocation is sketched after the configuration list below.

Sysbench Testing Configuration (per VM)

  • CentOS 6.3 64-bit
  • Percona XtraDB 5.5.30-rel30.1
    • Database Tables: 100
    • Database Size: 10,000,000
    • Database Threads: 32
    • RAM Buffer: 24GB
  • Test Length: 3 hours
    • 2 hours preconditioning 32 threads
    • 1 hour 32 threads
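
For readers who want to approximate this setup, here is a hedged sketch of an equivalent sysbench OLTP run driven from Python. It uses current sysbench 1.x syntax (oltp_read_write) with the table count, table size, and thread count from the list above; the original testing used older Percona/sysbench-era tooling, and the MySQL connection parameters shown are placeholders.

    # Sketch: a sysbench OLTP run mirroring the configuration above
    # (100 tables x 10M rows, 32 threads). Connection details are placeholders.
    import subprocess

    common = [
        "sysbench", "oltp_read_write",
        "--tables=100", "--table-size=10000000", "--threads=32",
        "--mysql-host=127.0.0.1", "--mysql-user=sbtest",
        "--mysql-password=sbtest", "--mysql-db=sbtest",
    ]

    subprocess.run(common + ["prepare"], check=True)              # build the dataset
    subprocess.run(common + ["--time=7200", "run"], check=True)   # 2-hour preconditioning
    subprocess.run(common + ["--time=3600", "run"], check=True)   # 1-hour measured run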

With the Sysbench OLTP test, the GIGABYTE saw an aggregate score of 19,154.9 TPS.

With Sysbench latency, the server had an average of 13.37ms.

In our worst-case scenario (99th percentile) latency, the server saw 24.53ms for aggregate latency.

VDBench Workload Analysis

When it comes to benchmarking storage arrays, application testing is best, and synthetic testing comes in second place. While not a perfect representation of actual workloads, synthetic tests do help to baseline storage devices with a repeatability factor that makes it easy to do apples-to-apples comparisons between competing solutions. These workloads offer a range of testing profiles, from “four corners” tests and common database transfer-size tests to trace captures from different VDI environments. All of these tests leverage the common vdBench workload generator, with a scripting engine to automate and capture results over a large compute testing cluster. This allows us to repeat the same workloads across a wide range of storage devices, including flash arrays and individual storage devices; a minimal sketch of how one of these profiles is expressed as a vdbench parameter file follows the profile list below.

Profiles:

  • 4K Random Read: 100% Read, 128 threads, 0-120% iorate
  • 4K Random Write: 100% Write, 64 threads, 0-120% iorate
  • 64K Sequential Read: 100% Read, 16 threads, 0-120% iorate
  • 64K Sequential Write: 100% Write, 8 threads, 0-120% iorate
  • Synthetic Database: SQL and Oracle
  • VDI Full Clone and Linked Clone Traces
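
To make the profile definitions concrete, here is a minimal sketch of a vdbench parameter file for the 4K random read corner, written out from Python. Only the transfer size, read percentage, and 128-thread depth come from the profile list above; the device path, fixed iorate, run duration, and interval are illustrative assumptions (the actual testing steps the I/O rate from 0-120%).

    # Sketch: a vdbench parameter file for the 4K random read profile
    # (100% read, 128 threads). Device path and durations are assumptions.
    params = """
    sd=sd1,lun=/dev/nvme0n1,openflags=o_direct
    wd=wd_4kread,sd=sd1,xfersize=4k,rdpct=100,seekpct=100
    rd=rd_4kread,wd=wd_4kread,iorate=max,forthreads=128,elapsed=300,interval=5
    """

    with open("4k_rand_read.vdb", "w") as f:
        f.write("\n".join(line.strip() for line in params.strip().splitlines()) + "\n")

    # The file would then be passed to vdbench, e.g.: ./vdbench -f 4k_rand_read.vdb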

With random 4K read, the GIGABYTE R281-NO0 started at 539,443 IOPS at 114.8µs and went on to peak at 5,326,746 IOPS at a latency of 238µs.

 

4K random write showed sub-100µs latency until about 3.25 million IOPS, with a peak score of 3,390,371 IOPS at a latency of 262.1µs.

 

For sequential workloads, we looked at 64K. For 64K read, we saw peak performance of about 640K IOPS or 4GB/s at about 550µs of latency before dropping off some.

 

64K write saw sub-100µs latency until about 175K IOPS or 1.15GB/s, and went on to peak at 259,779 IOPS or 1.62GB/s at a latency of 581.9µs before dropping off some.

 

Our next set of tests is our SQL workloads: SQL, SQL 90-10, and SQL 80-20. Starting with SQL, the GIGABYTE had a peak performance of 2,345,547 IOPS at a latency of 159.4µs.

 

With SQL 90-10 we saw the server peak at 2,411,654 IOPS with a latency of 156.1µs.

 

Our SQL 80-20 test had the server peak at 2,249,683 IOPS with a latency of 166.1µs.

Next up are our Oracle workloads: Oracle, Oracle 90-10, and Oracle 80-20. Starting with Oracle, the GIGABYTE R281-NO0 peaked at 2,240,831 IOPS at 165.3µs for latency.

 

Oracle 90-10 saw a peak performance of 1,883,800 IOPS at a latency of 136.2µs.

In Oracle 80-20 the server peaked at 1,842,053 IOPS at 139.3µs for latency.

 

Next, we switched over to our VDI clone test, Full and Linked. For VDI Full Clone (FC) Boot, the GIGABYTE peaked at 1,853,086 IOPS and a latency of 198µs.

Looking at VDI FC Initial Login, the server started at 83,797 IOPS at 86.7µs and went on to peak at 808,427 IOPS with a latency of 305.9µs before dropping off some.

 

VDI FC Monday Login saw the server peak at 693,431 IOPS at a latency of 207.6µs.

 

For VDI Linked Clone (LC) Boot, the GIGABYTE server peaked at 802,660 IOPS at 194µs for latency.

Looking at VDI LC Initial Login, the server saw a peak of 409,901 IOPS with 195.2µs latency.

Finally, VDI LC Monday Login had the server with a peak performance of 488,516 IOPS with a latency of 273µs.

Conclusion

The 2U GIGABYTE R281-NO0 is an all-NVMe server built for performance. The server leverages two second-generation Intel Xeon Scalable CPUs and supports up to 12 DIMMs per socket. Depending on the CPU choice, it supports DRAM speeds up to 2933MHz and Intel Optane PMEM. Users can have up to 3TB of DRAM, or a larger memory footprint with Optane. The storage setup is highly configurable, with the build we reviewed supporting 24 2.5″ NVMe SSDs. An interesting power feature is Cold Redundancy, which switches the extra PSU to standby mode when the system load is under 40%, saving power costs.

For performance testing we ran our Applications Analysis Workloads as well as our VDBench Workload Analysis. For Applications Analysis Workloads we started off with SQL Server. Here we saw an aggregate transactional score of 12,645 TPS with an average latency of 1ms. Moving on to Sysbench, the GIGABYTE server gave us an aggregate score of 19,154 TPS, an average latency of 13.37ms, and a worst-case (99th percentile) latency of only 24.53ms.

In our VDBench Workload Analysis, the server turned in some strong, impressive numbers. Peak highlights include 5.3 million IOPS for 4K read, 3.4 million IOPS for 4K write, 4GB/s for 64K read, and 1.62GB/s for 64K write. For our SQL workloads the server hit 2.3 million IOPS, 2.4 million IOPS for 90-10, and 2.3 million IOPS for 80-20. With Oracle we saw 2.2 million IOPS, 1.9 million IOPS for Oracle 90-10, and 1.8 million IOPS for 80-20. For our VDI Clone tests we saw 1.9 million IOPS for Boot, 808K IOPS for Initial Login, and 693K IOPS for Monday Login for Full Clone. For Linked Clone we saw 803K IOPS for Boot, 410K IOPS for Initial Login, and 489K IOPS for Monday Login.

The GIGABYTE R281-NO0 is a powerhouse of a server, capable of supporting a wide range of flash technologies. Being built around 2nd Generation Intel Xeon Scalable hardware, it also benefits from the newer CPUs’ support for Optane PMEM. The server offers plenty of configurability on the storage end and some nifty power features. We’re most enamored with the 24 NVMe SSD bays, of course; anyone with high-performance storage needs will be as well. This server from GIGABYTE is well designed to be a fantastic storage-heavy server for a variety of use cases.
