Enterprise Test Lab Networking Overview

Storage Area Networks (SANs) leverage bus-style network architectures to accelerate and decentralize access to enterprise storage. Accelerating storage access therefore encompasses accelerating SAN performance as well as the storage media itself, making it critical to equip the StorageReview Enterprise Test Lab with a variety of interconnect options across the three key network protocol families: Ethernet, InfiniBand and Fibre Channel. These interconnect options provide the means to perform comprehensive benchmarking based on conditions comparable to what SAN administrators actually experience in the field.

All-flash storage arrays and hybrid arrays are the storage technologies that have been pushing the development of faster interconnects, but administrators of large hard disk pools also benefit from paying close attention to SAN performance and reliability. Today, access to converged infrastructure interconnect platforms can help avoid getting locked into one protocol or physical link layer, but interconnect choices made during the architecture of storage networks can affect SAN capabilities and performance for years to come.

A wide variety of options for SAN implementation, including cabling and connector choices, can make it difficult to tell at first glance which interconnect combinations and configurations are most suitable for a particular device or application. This overview of the Enterprise Test Lab's networking gear will start with a discussion of each of the three most common interconnect protocols (Ethernet, InfiniBand and Fibre Channel), outlining the options available to SAN architects and technicians. We'll then break down and illustrate the key physical link layer options available for StorageReview enterprise benchmarks.

Ethernet

The StorageReview lab is equipped with Ethernet switching hardware from Netgear and Mellanox that enables us to deploy connectivity options from 1GbE to 40GbE. For 1GbE benchmarks, we use Netgear ProSafe prosumer switches. Our 10GbE connectivity is powered by a Netgear ProSafe Plus XS708E utilizing 8P8C connectors and a Mellanox SX1036 10/40Gb, with fan-out cables that break out four SFP+ Ethernet ports from a single QSFP port.

Although the term Ethernet is often used synonymously with twisted pair copper cabling that features an 8P8C connector (commonly referred to as an RJ45 connector), Ethernet refers to a network communications standard which can be employed over both copper and fiber optic physical links. For relatively short runs, copper cables often offer a compelling price-to-performance ratio for Ethernet applications, but as required data transmission speeds increase and transmission distances grow, fiber optic cable begins to offer a competitive advantage over copper.

We see new gear with Ethernet at three tiers:

  • 1GbE – As of 2013, most home and prosumer gear ships with on-board 1000BASE-T gigabit Ethernet, as does gear for enterprise workstations, SMB NAS/SAN, and on-board server LAN interfaces. We are beginning to see more and more 10GBASE-T twisted pair Ethernet shipping standard on server and storage hardware, but for now gigabit speeds are the de facto minimum standard for wired network connectivity.
  • 10GbE – In the first decade of the 2000s, 10 gigabit speeds required fiber optic or short-run Twinax via SFP-style connectors. Copper and the 8P8C connector are now making inroads into SAN architectures thanks to the growing availability of 10GBASE-T network hardware, which carries ten-gigabit speeds over cat6 and cat7 twisted pair and has made 10GBASE-T an affordable choice for high-speed storage interconnect, although fiber optic still dominates the marketplace. Our Netgear M7100 switch features twenty-four 10GBASE-T ports and four SFP+ ports, while our Netgear ProSafe Plus XS708E features eight 10GbE ports and one shared 10G fiber SFP+ port.
  • 40GbE – Via QSFP+, which consolidates four SFP+ lanes into one connector, we're able to achieve the 40GbE connectivity often necessary for the fastest modern storage arrays. Converged infrastructure hardware, such as the Mellanox SX6036 we use in the lab, provides options for 40GbE or 56Gb/s InfiniBand depending on configuration. A Mellanox SX1036 10/40Gb Ethernet switch serves as the backbone for the lab's network. A quick comparison of these three tiers is sketched below.
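To put these tiers in perspective, the short sketch below converts nominal line rates into approximate throughput and shows how long a hypothetical 2TB dataset would take to move at each tier. This is a rough, back-of-the-envelope illustration rather than a benchmark; the dataset size is arbitrary and real-world throughput will be lower due to protocol overhead, encoding, and device limits.

```python
# Rough comparison of nominal Ethernet line rates (not a benchmark; real-world
# throughput is lower due to protocol overhead, encoding, and device limits).

TIERS_GBPS = {"1GbE": 1, "10GbE": 10, "40GbE": 40}  # nominal line rate in gigabits/s

DATASET_TB = 2  # hypothetical dataset size in terabytes (decimal, 10**12 bytes)

def transfer_time_hours(size_tb: float, rate_gbps: float) -> float:
    """Best-case time to move size_tb terabytes at rate_gbps gigabits per second."""
    size_bits = size_tb * 10**12 * 8           # terabytes -> bits
    seconds = size_bits / (rate_gbps * 10**9)  # bits / (bits per second)
    return seconds / 3600

for name, gbps in TIERS_GBPS.items():
    print(f"{name:>5}: ~{gbps / 8:5.2f} GB/s, "
          f"{DATASET_TB}TB in ~{transfer_time_hours(DATASET_TB, gbps):5.2f} h")
```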

InfiniBand

InfiniBand became widely available in the early 2000s, with switch manufacturers Mellanox and QLogic driving the standard forward. InfiniBand offers five tiers of link performance, all measured per unidirectional lane: single data rate (SDR) at 2.5Gb/s, double data rate (DDR) at 5Gb/s, quad data rate (QDR) at 10Gb/s, fourteen data rate (FDR) at 14Gb/s, and enhanced data rate (EDR) at roughly 25Gb/s. Links commonly aggregate four of these lanes (4x) into a single connection.
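As a quick illustration of how these per-lane rates map to the link speeds quoted for switches and HBAs, the sketch below multiplies each nominal rate by the common 4x lane count; 4x FDR, for example, works out to the 56Gb/s figure used for the lab's fabric. The per-lane numbers are nominal, and signaling overhead differs between generations, so treat this as arithmetic rather than a throughput claim.

```python
# Nominal InfiniBand per-lane data rates (Gb/s, unidirectional). Links are
# typically aggregated as 4x (and sometimes 12x) lane groups.
PER_LANE_GBPS = {"SDR": 2.5, "DDR": 5, "QDR": 10, "FDR": 14, "EDR": 25}

def link_rate(generation: str, lanes: int = 4) -> float:
    """Aggregate nominal link rate in Gb/s for a given generation and lane count."""
    return PER_LANE_GBPS[generation] * lanes

for gen in PER_LANE_GBPS:
    print(f"4x {gen}: {link_rate(gen):g} Gb/s")

# 4x FDR -> 56 Gb/s, the rate quoted for the lab's Mellanox SX6036 fabric.
```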

InfiniBand can be used with copper or fiber optic cabling depending on the performance and distance required for the interconnect. Today, cables are commonly terminated with SFF-8470 connectors, often simply called InfiniBand connectors, as well as SFP-style connectors. The StorageReview Enterprise Test Lab is home to a 56Gb/s Mellanox InfiniBand fabric which is used in benchmarking flash storage appliances, including our MarkLogic NoSQL Database benchmark.

The MarkLogic NoSQL benchmark compares storage performance across a variety of devices including PCIe application accelerators, groups of SSDs, and large HDD arrays. We use a Mellanox InfiniBand fabric powered by our Mellanox SX6036 to provide the interconnects for these benchmarks because of its 56Gb/s network throughput as well as its support for the iSER (iSCSI Extensions for RDMA) and SRP (SCSI RDMA Protocol) protocols. iSER and SRP replace the iSCSI TCP stack with Remote Direct Memory Access (RDMA), which enables greater efficiency in a clustered environment by allowing network traffic to bypass the systems' CPUs and letting data be copied from the sending system's memory directly to the receiving system's memory. The same benefits carry over to Ethernet, where iSER can run on RDMA-capable (iWARP or RoCE) adapters.
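As a simplified illustration of how an initiator is pointed at an iSER target, the sketch below wraps the open-iscsi iscsiadm utility from Python to discover a target, switch the node's transport from TCP to iSER, and log in. The target IQN and portal address are placeholders, and exact iscsiadm behavior can vary by distribution and open-iscsi version, so treat this as an outline rather than a turnkey script.

```python
# Sketch: point an open-iscsi initiator at an iSER-capable target and log in
# using the RDMA transport instead of TCP. IQN and portal are placeholders;
# run as root on a host with open-iscsi and the RDMA drivers loaded.
import subprocess

PORTAL = "192.168.1.50:3260"                 # hypothetical target portal
TARGET = "iqn.2013-01.com.example:storage1"  # hypothetical target IQN

def run(*args: str) -> None:
    """Run an iscsiadm command and raise if it fails."""
    subprocess.run(["iscsiadm", *args], check=True)

# 1. Discover targets exposed by the portal (SendTargets discovery).
run("-m", "discovery", "-t", "sendtargets", "-p", PORTAL)

# 2. Switch the node record's transport from the default (tcp) to iser.
run("-m", "node", "-T", TARGET, "-p", PORTAL,
    "-o", "update", "-n", "iface.transport_name", "-v", "iser")

# 3. Log in; block I/O to the target now bypasses the TCP stack via RDMA.
run("-m", "node", "-T", TARGET, "-p", PORTAL, "--login")
```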

Having this InfiniBand fabric in place, in conjunction with an EchoStreams GridStreams Quad-Node Server and a standardized set of MarkLogic NoSQL benchmarks, allows us to push new high-performance storage devices such as the Micron P320h 2.5″ PCIe Application Accelerator to see how manufacturers' specifications compare to performance under real-world conditions. Later this year we will be adding a new VMware benchmark which also leverages StorageReview's InfiniBand SAN.

Fibre Channel Protocol

By the end of the 1990s, Fibre Channel was the dominant interconnect protocol for high-speed computing and storage area networks (SANs). Creating some confusion, Fibre Channel is used as shorthand for Fibre Channel Protocol, a serial, channel-based communication protocol that can be deployed across copper as well as fiber optic cable. Fibre Channel's physical link layer allows a single communication path to traverse more than one type of link technology, including mixed copper and fiber optic segments, on the way to its destination.

The StorageReview lab incorporates both 8Gb/s and 16Gb/s FC connectivity via a QLogic SB5800V 8Gb Fibre Channel switch and a Brocade 6510 16Gb FC switch. For converged infrastructure applications, we use 16Gb QLogic dual-port HBAs, and in other situations we also utilize single-port 16Gb Emulex Fibre Channel HBAs. One key difference between Fibre Channel gear and other SAN infrastructure is pricing and licensing. Whereas Ethernet switches are generally available with a set number of ports (commonly 12, 24, or 48) that are all active upon purchase, it is common for FC manufacturers to use a per-port licensing model, where a Fibre Channel switch may feature 48 physical ports but only include bundled licenses for 12 or 24 of them with the initial purchase. Additional ports, and sometimes other functionality, are enabled as needed by purchasing additional licenses.
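To make the licensing model concrete, the short sketch below works through a hypothetical purchase plan for a 48-port switch sold with a 24-port base license and 12-port add-on packs. The port counts and prices are invented for illustration and do not reflect any vendor's actual pricing.

```python
# Hypothetical illustration of per-port FC switch licensing (all values invented).
PHYSICAL_PORTS = 48        # ports present on the chassis
BASE_LICENSED_PORTS = 24   # ports active with the bundled base license
LICENSE_PACK_SIZE = 12     # additional ports enabled per add-on license pack
LICENSE_PACK_PRICE = 5000  # hypothetical price per add-on pack

def packs_needed(ports_required: int) -> int:
    """Add-on license packs needed to activate ports_required ports."""
    extra = max(0, ports_required - BASE_LICENSED_PORTS)
    return -(-extra // LICENSE_PACK_SIZE)  # ceiling division

for needed in (16, 30, 48):
    packs = packs_needed(needed)
    print(f"{needed} ports -> {packs} add-on pack(s), ~${packs * LICENSE_PACK_PRICE}")
```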

Fibre Channel is the first SAN protocol to be supported by QLogic's Mt. Rainier technology, which combines QLogic SAN HBAs with server-based flash storage and software to provide OS-independent caching via a PCIe card installed in servers. Mt. Rainier centers on a QLogic PCIe HBA-based adapter card that provides the logic and cache management and connects to the flash tier itself. The net result is a local cache for the compute server via an interface card that would have been required for Fibre Channel SAN connectivity anyway. To applications running on a server or a cluster, the cache looks like SAN storage, but it's faster to access.
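Conceptually, the read path described above behaves like a read-through cache sitting in front of the SAN: the application asks for a block, the adapter serves it from local flash if present, and otherwise fetches it from the array and caches it for next time. The snippet below is a deliberately simplified model of that idea, not QLogic's actual implementation; the flash tier is modeled as an in-memory dictionary and read_from_san() is a hypothetical stand-in for a fabric read.

```python
# Conceptual model of HBA-based read caching: serve hot blocks from a local
# flash tier, fall back to the SAN on a miss. Purely illustrative.
from collections import OrderedDict

CACHE_CAPACITY = 4          # blocks the "flash tier" holds in this toy model
flash_cache = OrderedDict() # stands in for the on-card flash cache

def read_from_san(block_id: int) -> bytes:
    """Stand-in for a (slower) read across the Fibre Channel fabric."""
    return f"block-{block_id}".encode()

def read_block(block_id: int) -> bytes:
    """Read-through cache: hit in local flash if possible, else fetch and cache."""
    if block_id in flash_cache:
        flash_cache.move_to_end(block_id)   # refresh LRU position
        return flash_cache[block_id]
    data = read_from_san(block_id)
    flash_cache[block_id] = data
    if len(flash_cache) > CACHE_CAPACITY:
        flash_cache.popitem(last=False)     # evict least recently used block
    return data

for blk in (1, 2, 1, 3, 4, 5, 1):
    read_block(blk)
print(list(flash_cache))                    # most recently used blocks remain
```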

This approach, leveraging the SAN infrastructure to provide local and cluster-wide caching, demonstrates the significant room for innovation in the current generation of Fibre Channel technology. Installing more than one of QLogic's FabricCache adapters in a cluster allows the cache to be shared throughout the cluster; integration with the Fibre Channel fabric allows the on-card cache to be leveraged at both the local and cluster scales.

Physical Components

Twisted Pair Copper Cable (Cat Cables)

Twisted pair copper cable is the most widely used interconnect outside of high-speed storage networks, although with the arrival of gigabit and 10 gigabit copper interconnects, twisted pair has been able to make gains against fiber optic cable for high-speed SAN applications. The “cat” (category) family of twisted pair cables has the advantages of low cost, ease of manufacturing, and a connector type which has remained relatively unchanged through several generations of network technologies.

Twisted pair copper cable is rated by a generation number which reflects its maximum data transmission rate and maximum rated link distance. Beginning in the 1990s, a succession of "cat" generations has brought 100Mb/s connectivity (cat5), 1Gb/s (cat5e), 10Gb/s (cat6), and increased maximum link distance and shielding (cat7).

Twinax

Originally used in early microcomputer systems, twinaxial cabling (Twinax) has found continued use as a physical layer for 10GbE interconnects in the 21st century. Due to its simple construction, Twinax offers cost advantages over fiber optic cables for short runs, but nonetheless is more expensive than 10GbE via cat6 or cat7 twisted pair. If twisted pair cable continues to gain ground in SAN interconnect applications, Twinax may once again be relegated to legacy and niche applications.

Fiber Optic Cable

Fiber optic cables offer a number of technical advantages over copper for data transmission, though their much greater cost, and the lack of a practical way for technicians to terminate their own cables in the data center, have led many enterprises to reconsider copper for interconnects. Among fiber optic's advantages are longer maximum transmission distances and immunity to electromagnetic 'noise' from nearby equipment or improper installation.

Fiber optic cable comes in two varieties: single mode and multi-mode. Single mode features a narrow core that carries a single light path, and is capable of greater maximum data rates and longer distances between network segments. However, it requires the use of more expensive laser light sources, which can drive up the cost of network devices. Multi-mode uses a wider core, is tolerant of less expensive cable materials, and can be driven by simpler, less expensive LED light sources rather than lasers.

At present, optical remains the universal cable for enterprise SAN implementation. Modern fiber optic interconnects generally feature industry standard sub-connectors, making upgrades and protocol changes a matter of swapping transceivers to use the same link for Ethernet, Fibre Channel, or InfiniBand. Initial costs for fiber optic installation are high, but quality cable has a long lifespan, as bandwidth upgrades often only require upgrading transceivers and switch hardware.

SFP-Style Transceiver/Connectors

During the last 10 years, SFP and its successors have commanded a growing segment of the interconnect market due to the flexibility of the form factor. Most of what we see in 2013 features SFP+ or QSFP, although there are other variants.

  • SFP: Small Form-Factor Pluggable, designed for speeds up to 1Gb/s
  • SFP+: Enhanced Small Form-Factor Pluggable, designed for speeds up to 10Gb/s and backwards compatible with SFP
  • QSFP: Quad Small Form-Factor Pluggable, designed to support 40Gb/s and faster connections; this form factor combines four interfaces into one transceiver

Looking Forward

With support from Brocade, Emulex, HP, Intel, Mellanox, Netgear, QLogic and others, the StorageReview lab is equipped for any interconnect need that could arise. In a heterogeneous environment like this, flexibility is key; there's no telling when a portion of the Fibre Channel backbone may need to be provisioned for a new array or when we'll see a standard small NAS that supports quad-Ethernet link aggregation. Not every enterprise or SMB environment needs that much flexibility, of course, and to that end some storage providers are talking more about modularity than about locking buyers into specific interconnect requirements. NetApp, for instance, offers swappable Fibre Channel and 10GbE cards to make their arrays easier to slot into environments that may have standardized on one or the other.

The interconnect vendors are doing something similar as well: SFP+ switches are now available that make it simple for technicians to switch between link media and protocols without tearing out and rebuilding. QLogic converged Host Bus Adapters (HBAs) support changing ports between Ethernet and Fibre Channel by swapping transceivers and cables and changing a setting in the card's BIOS. Mellanox has been an early leader in networking cards that utilize virtual interconnects, allowing the same NIC to operate via InfiniBand or Ethernet by connecting a different cable and loading a different driver.

The momentum towards converged infrastructure is running parallel to the trend towards virtualization. The implications of the most recent generations of SAN and virtualization technology point to a steady movement to decouple physical infrastructure not only from the servers and applications that run on it, but also from the networking layer that connects the infrastructure itself. Rather than competing interconnect platforms retreating into silos and competition between technologies, the market is driving towards inter-compatibility and heterogeneous, converged infrastructures.