
Mellanox Announces Switch-IB 100Gb/s EDR InfiniBand Switch

by Mark Kidd

Mellanox today announced its forthcoming Switch-IB, slated to be the first 100Gb/s InfiniBand interconnect solution available on the market. The Switch-IB represents the seventh generation of Mellanox's InfiniBand offerings and will incorporate 36 ports rated at 100Gb/s each, for a total of 7.2Tb/s of rated throughput. The switch features 144 SerDes modules that operate between 1Gb/s and 25Gb/s per lane and can deliver 5.4 billion packets per second.


The Switch-IB will introduce new router functionality to Mellanox's InfiniBand lineup, including full isolation and the scalability to support clustered deployments with hundreds of thousands of individual nodes. The Switch-IB is specified for less than 130ns of latency, and its routing functions support Fat-Tree, Torus, and Dragonfly+ topologies.

In addition to the new 100Gb/s InfiniBand solution, Mellanox announced that its FDR 56Gb/s InfiniBand and 10/40 Gigabit Ethernet interconnect solutions are being optimized to support the AppliedMicro X-Gene 64-bit ARM server platform. The X-Gene platform is the first to utilize AppliedMicro’s new ARM server-on-a-chip technologies.

Mellanox has also announced the availability of a new HPC-X Scalable Software Toolkit with a variety of tools to increase the scalability and performance of message communications in high-performance computing environments. HPC-X provides communication libraries supporting the MPI, SHMEM, and PGAS programming models, alongside performance accelerators that can integrate with Mellanox interconnect solutions as well as third-party Ethernet and InfiniBand infrastructure.

The HPC-X Scalable Software Toolkit includes the ScalableMPI message passing interface based on Open MPI, the ScalableSHMEM library for one-sided communications, ScalableUPC for the UPC parallel programming language, the MXM messaging accelerator, the FCA fabric collectives accelerator, and an IPM tool for HPC performance monitoring and benchmarking.
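Because ScalableMPI is based on Open MPI, standard MPI applications should run on top of it unchanged. As a rough illustration of the kind of message passing these libraries accelerate, below is a minimal sketch using only the standard MPI C API; nothing here is specific to HPC-X, and the tag and payload values are purely illustrative. Rank 0 sends an integer to rank 1 over whatever transport the underlying MPI stack selects, such as a Mellanox InfiniBand fabric.

```c
/* Minimal MPI point-to-point example (standard MPI C API).
 * Illustrative only: not HPC-X-specific code. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return 1;
    }

    const int tag = 0;   /* illustrative message tag */
    int payload = 42;    /* illustrative message contents */

    if (rank == 0) {
        /* Rank 0 sends a single integer to rank 1. */
        MPI_Send(&payload, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int received = 0;
        /* Rank 1 receives the integer from rank 0. */
        MPI_Recv(&received, 1, MPI_INT, 0, tag, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", received);
    }

    MPI_Finalize();
    return 0;
}
```

A program like this would typically be built with mpicc and launched with mpirun across two or more ranks. Accelerators such as MXM and FCA operate beneath this API level, so application code generally does not need to change to benefit from them.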
