InfiniBand/VPI Adapter Cards

Mellanox InfiniBand Host Channel Adapters (HCAs) provide the highest performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.
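
As a brief, hedged illustration of how these adapters appear to software, the minimal C sketch below uses the standard libibverbs API from rdma-core (an illustrative example, not vendor sample code) to open the first RDMA device and print the link state, width and speed codes negotiated on port 1; 4x width at EDR signalling corresponds to the 100Gb/s links described below. It assumes a Linux host with rdma-core installed and links with -libverbs.

    /* Illustrative sketch: query the active link attributes of the first
     * RDMA device via libibverbs (rdma-core).
     * Build: cc query_port.c -o query_port -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx) {
            fprintf(stderr, "failed to open %s\n", ibv_get_device_name(devs[0]));
            ibv_free_device_list(devs);
            return 1;
        }

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("device       : %s\n", ibv_get_device_name(devs[0]));
            printf("port 1 state : %d (4 = ACTIVE)\n", port.state);
            /* width code 2 = 4x lanes, speed code 32 = EDR (~25Gb/s per lane),
             * so 4x EDR is a 100Gb/s link */
            printf("active width : %u\n", port.active_width);
            printf("active speed : %u\n", port.active_speed);
            printf("link layer   : %s\n",
                   port.link_layer == IBV_LINK_LAYER_ETHERNET ? "Ethernet" : "InfiniBand");
        }

        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }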

Model no. | Description | Key features

Intelligent ConnectX-5 adapter cards, the newest additions to the Mellanox Smart Interconnect suite, support Co-Design and In-Network Compute and introduce new acceleration engines.

  • EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port and all lower speeds
  • Up to 200M messages/second
  • Tag Matching and Rendezvous Offloads
  • Adaptive Routing on Reliable Transport

ConnectX-4 adapter cards, supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, provide the highest performance and most flexible solution.

  • EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port
  • 1/10/20/25/40/50/56/100Gb/s speeds
  • 150M messages/second
  • Single and dual-port options available

Connect-IB adapter cards provide the highest performing and most scalable interconnect solution for server and storage systems.

  • Greater than 100Gb/s over InfiniBand
  • Greater than 130M messages/sec
  • 1μs MPI ping latency (see the sketch after this list)
  • PCI Express 3.0 x16
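
The 1μs MPI ping latency quoted for these adapters is conventionally measured with a two-rank ping-pong microbenchmark. The minimal C/MPI sketch below is an illustration of that measurement, not a vendor benchmark; it assumes an MPI implementation such as Open MPI or MPICH running over the InfiniBand fabric, exactly two ranks, and reports the average one-way latency for 1-byte messages.

    /* Illustrative MPI ping-pong latency sketch.
     * Build: mpicc -O2 pingpong.c -o pingpong
     * Run:   mpirun -np 2 ./pingpong */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank = 0, size = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size != 2) {
            if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
            MPI_Finalize();
            return 1;
        }

        const int iters = 10000;
        char byte = 0;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            /* one-way latency is half the averaged round-trip time */
            printf("average one-way latency: %.2f us\n",
                   (t1 - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }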

ConnectX-3 Pro adapter cards with Virtual Protocol Interconnect (VPI) support InfiniBand and Ethernet connectivity with hardware offload engines for Overlay Networks (“Tunneling”).

  • Virtual Protocol Interconnect
  • 1μs MPI ping latency
  • Up to 56Gb/s InfiniBand or 40 Gigabit Ethernet per port
  • Single- and Dual-Port options available

ConnectX-3 adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity, provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in Enterprise Data Centers, High-Performance Computing, and embedded environments.

  • Virtual Protocol Interconnect
  • 1μs MPI ping latency
  • Up to 56Gb/s InfiniBand or 40 Gigabit Ethernet per port
  • Single- and Dual-Port options available
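
Virtual Protocol Interconnect means each port on these cards can be configured to run either InfiniBand or Ethernet. As a hedged illustration only: on Linux hosts using the mlx4 driver for ConnectX-3, the per-port protocol is exposed as a sysfs attribute, and a small program such as the sketch below could switch port 1 to Ethernet. The PCI address is a placeholder, the exact path depends on the driver and system, and the change requires root privileges; treat this as a sketch rather than a supported procedure and consult the driver documentation.

    /* Illustrative sketch only: set port 1 of a ConnectX-3 (mlx4 driver)
     * to Ethernet by writing its sysfs port-type attribute.
     * The PCI address is a placeholder; accepted values are "ib", "eth"
     * and "auto". Requires root. */
    #include <stdio.h>

    int main(void)
    {
        const char *attr = "/sys/bus/pci/devices/0000:03:00.0/mlx4_port1"; /* placeholder */
        FILE *f = fopen(attr, "w");
        if (!f) {
            perror("fopen");
            return 1;
        }
        fputs("eth\n", f);
        fclose(f);
        return 0;
    }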

InfiniBand Switch Systems

Mellanox’s family of InfiniBand switches delivers the highest performance and port density with complete fabric management solutions, enabling compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity.

Model no. | Description | Key features

36-port Non-blocking Managed EDR 100Gb/s InfiniBand Smart Switch

  • 36 x EDR 100Gb/s ports in a 1U switch
  • 7Tb/s aggregate switch throughput (see the note after this list)
  • Up to 7.02 billion messages per second
  • 90ns switch latency
  • 136W typical power consumption
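
As a point of reference for the throughput figure above, and assuming (as is conventional for switch capacity numbers) that each port is counted in both directions:

    36 ports x 100 Gb/s x 2 (full duplex) = 7.2 Tb/s

The 43Tb/s, 64Tb/s and 130Tb/s capacities quoted for the 216-, 324- and 648-port chassis below follow the same per-port accounting, within rounding.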

36-port Non-blocking Externally Managed EDR 100Gb/s InfiniBand Smart Switch

  • 36 x EDR 100Gb/s ports in a 1U switch
  • 7Tb/s aggregate switch throughput
  • 90ns switch latency
  • 136W typical power consumption

The highest performing fabric solution, delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 12U chassis.

  • 216 EDR (100Gb/s) ports in a 12U switch
  • 43Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant

Delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 16U chassis.

  • 324 EDR (100Gb/s) ports in a 16U switch
  • 64Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant

The highest performing fabric solution, delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 28U chassis.

  • 648 EDR (100Gb/s) ports in a 28U switch
  • 130Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant

Mellanox Gateway Systems

Mellanox’s InfiniBand to Ethernet gateway functionality, built into Mellanox switches, provides the most cost-effective, high-performance solution for unified data center connectivity. Mellanox’s gateways enable data centers to operate at up to 56Gb/s network speeds while seamlessly connecting to 1, 10 and 40 Gigabit Ethernet networks. Existing LAN infrastructures and management practices can be preserved, easing deployment and providing significant return on investment.

Model no. | Description | Key features

High-performance, low-latency 56Gb/s FDR InfiniBand to 40Gb/s Ethernet gateway built with Mellanox’s 6th generation SwitchX®-2 InfiniBand switch device.

  • 36 56Gb/s ports in a 1U switch
  • Up to 4Tb/s aggregate switching capacity
  • 400ns latency between InfiniBand and Ethernet
  • Optional redundant power supplies and fan drawers

A fully flexible system with 36 EDR 100Gb/s ports, which can be split among six different subnets.

  • 36 EDR 100Gb/s ports in a 1U system
  • Up to 7Tb/s aggregate data capacity
  • Ultra-low (100ns) latency between InfiniBand subnets
  • Optional redundant power supplies and fan drawers

Long-Haul Systems

BridgeX is the first VPI (Virtual Protocol Interconnect) gateway, allowing OEMs to design I/O consolidation solutions using InfiniBand or Ethernet as the convergence fabric of choice. A unified server I/O, where multiple traffic types run over a single physical connection, can cut I/O cost and power significantly while reducing total cost of ownership through a smaller number of ports to manage, reduced cabling complexity, and simpler fabric management. At the same time, connectivity to IP/Ethernet-based LAN and NAS infrastructures and Fibre Channel-based SAN infrastructures must remain seamless.

Model no. | Description | Key features

Extends Mellanox InfiniBand solutions from a single-location data center network to distances of up to 1km.

  • 16 Long-haul (40Gb/s) ports in a 1U system
  • Up to 640Gb/s long-haul aggregate data
  • 16 Downlink (56Gb/s) VPI ports
  • Compliant with IBTA 1.2.1 and 1.3

Extends Mellanox switch solutions from a single-location data center network to distances of up to 10km.

  • 6 Long-haul (40Gb/s) ports in a 1U system
  • Up to 240Gb/s long-haul aggregate data
  • 6 Downlink (56Gb/s) VPI ports
  • Compliant with IBTA 1.2.1 and 1.3

Supports 2 long-haul ports running at 40Gb/s to distances of up to 40km.

  • 2 Long-haul (40Gb/s) ports in a 2U system
  • Up to 80Gb/s long-haul aggregate data
  • 2 Downlink (56Gb/s) VPI ports
  • Compliant with IBTA 1.2.1 and 1.3

Extends Mellanox InfiniBand solutions from a single-location data center network to distances of up to 80km.

  • 1 Long-haul (40Gb/s) port in a 2U system
  • Up to 40Gb/s long-haul aggregate data
  • 1 Downlink (56Gb/s) VPI port
  • Compliant with IBTA 1.2.1 and 1.3

Unified Fabric Manager (UFM®)

Mellanox’s Unified Fabric Manager (UFM®) is a powerful platform for managing scale-out computing environments. UFM enables data center operators to efficiently monitor, provision, and operate the modern data center fabric. UFM eliminates the complexity of fabric management, provides deep visibility into traffic, and optimizes fabric performance.

Fabric Visibility & Control

UFM includes an advanced granular monitoring engine that provides real-time access to health and performance data for switches and hosts, enabling:

  • Real-time identification of fabric-related errors and failures
  • Insight into fabric performance and potential bottlenecks
  • Preventive maintenance via granular threshold-based alerts
  • SNMP traps and scriptable actions
  • Correlation of monitored data to application/service level, enabling quick and effective fabric analysis

Contact XENON for more information.