by Josh Linden

Mellanox Announces Connect-IB Host Channel Adapter Cards with 100 Gb/s Throughput

Mellanox Technologies has announced Connect-IB, a series of host channel adapter cards offering up to 100Gb/s of throughput, sub-microsecond latency, and 130 million messages per second. Connect-IB is optimized for multi-tenant environments running hundreds of virtual machines per server, using PCI Express 3.0 x16 and FDR InfiniBand to offload CPU protocol processing and data movement to the interconnect.

The Connect-IB product line consists of single- and dual-port adapters for PCI Express 3.0, with options for x8 and x16 host bus interfaces, as well as a single-port adapter for PCI Express 2.0 x16. Each port supports FDR 56Gb/s InfiniBand with MPI ping latency below 1µs. All Mellanox HCAs support CPU offload of transport operations.
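The quoted figure refers to MPI ping-pong latency, which is conventionally measured by bouncing a small message between two ranks and halving the average round-trip time. The sketch below illustrates that kind of measurement; it is generic MPI code rather than anything Mellanox-specific, and the iteration count and message size are arbitrary illustrative choices.

    /* Generic MPI ping-pong latency sketch (not Mellanox-specific).
     * Rank 0 and rank 1 exchange a 1-byte message; half the average
     * round-trip time approximates the one-way "ping" latency. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000;   /* arbitrary iteration count */
        char buf[1] = {0};
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0)
                fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("one-way latency: %.3f us\n",
                   (t1 - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }

Run with two ranks placed on separate adapter-equipped nodes (for example, mpirun -np 2 across two hosts) so the measurement exercises the interconnect rather than shared memory.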

Connect-IB Key Features

  • Greater than 100Gb/s over InfiniBand
  • Greater than 130M messages/sec
  • Less than 1µs MPI ping latency
  • CPU offload of transport operations
  • Application offload
  • GPU communication acceleration
  • Dynamic Transport operation support
  • New data operations, including noncontiguous memory transfers
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
  • RoHS-R6

New applications such as Big Data analytics and in-memory computing, along with next-generation compute platforms like Intel’s Romley, benefit from parallel execution and RDMA (Remote Direct Memory Access). Connect-IB adapters will be supported by Windows and Linux distributions and offer OpenFabrics-based RDMA protocols and software.
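On the software side, OpenFabrics-based RDMA support means applications reach the adapter through the standard verbs API (libibverbs) rather than a proprietary stack. The following generic sketch enumerates the RDMA devices the verbs library reports and queries the first port of the first adapter found; it applies to any OFED-supported HCA and is not specific to Connect-IB.

    /* Generic OpenFabrics verbs sketch: list RDMA devices and query
     * the first port of the first adapter.  Illustrative only.
     * Build with: gcc query_hca.c -libverbs   (file name is arbitrary) */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
        if (!dev_list || num_devices == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        /* Open the first HCA reported by the verbs library. */
        struct ibv_context *ctx = ibv_open_device(dev_list[0]);
        if (!ctx) {
            fprintf(stderr, "failed to open %s\n",
                    ibv_get_device_name(dev_list[0]));
            ibv_free_device_list(dev_list);
            return 1;
        }

        struct ibv_device_attr dev_attr;
        struct ibv_port_attr port_attr;
        if (!ibv_query_device(ctx, &dev_attr) &&
            !ibv_query_port(ctx, 1, &port_attr)) {
            printf("%s: fw %s, %d port(s), port 1 state %d\n",
                   ibv_get_device_name(dev_list[0]), dev_attr.fw_ver,
                   dev_attr.phys_port_cnt, port_attr.state);
        }

        ibv_close_device(ctx);
        ibv_free_device_list(dev_list);
        return 0;
    }

On a host with the OFED stack loaded, a query of this sort would simply report a Connect-IB card alongside any other installed Mellanox HCAs.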

Availability
Adapter cards are sampling now to select customers; general availability is expected in Q3 2012.

Mellanox Connect-IB
