InfiniBand/VPI Adapter Cards

Mellanox InfiniBand Host Channel Adapters (HCAs) provide the highest performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion times and lower cost per operation.

model no. | description | key features

Intelligent ConnectX-5 adapter cards, the newest additions to the Mellanox Smart Interconnect suite, support Co-Design and In-Network Compute and introduce new acceleration engines.

  • EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port and all lower speeds
  • Up to 200M messages/second
  • Tag Matching and Rendezvous Offloads
  • Adaptive Routing on Reliable Transport
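
Tag Matching refers to the MPI receive-side rule that pairs each arriving message with a posted receive by source and tag; ConnectX-5 moves this lookup into the adapter hardware. The sketch below is an illustrative software rendering of that matching rule only (the function names and wildcard constants are assumptions mirroring MPI semantics, not the adapter's actual interface):

```python
from collections import deque

MPI_ANY_SOURCE = -1   # wildcard constants, mirroring MPI's semantics
MPI_ANY_TAG = -1

posted = deque()      # receives posted by the application, oldest first

def post_recv(source, tag):
    """Post a receive that will match messages from `source` with `tag`."""
    posted.append((source, tag))

def match(source, tag):
    """Match an arriving message against posted receives, oldest first.

    Returns the matched (source, tag) entry, or None if no receive matches.
    Hardware tag matching performs this same lookup without involving the
    host CPU, which is what makes the offload valuable at high message rates.
    """
    for entry in posted:
        s, t = entry
        if s in (MPI_ANY_SOURCE, source) and t in (MPI_ANY_TAG, tag):
            posted.remove(entry)
            return entry
    return None       # unexpected message: queued until a matching recv is posted

post_recv(MPI_ANY_SOURCE, 42)
post_recv(3, MPI_ANY_TAG)
print(match(3, 42))   # the oldest posted receive that matches wins
```

Note the ordering requirement: MPI mandates that the oldest matching posted receive wins, which is why the loop scans in posting order.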

Adapter cards supporting EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity provide the highest-performance and most flexible solution.

  • EDR 100Gb/s InfiniBand or 100Gb/s Ethernet per port
  • 1/10/20/25/40/50/56/100Gb/s speeds
  • 150M messages/second
  • Single and dual-port options available

Connect-IB adapter cards provide the highest performing and most scalable interconnect solution for server and storage systems.

  • Greater than 100Gb/s over InfiniBand
  • Greater than 130M messages/second
  • 1μs MPI ping latency
  • PCI Express 3.0 x16

ConnectX-3 Pro adapter cards with Virtual Protocol Interconnect (VPI) support InfiniBand and Ethernet connectivity and add hardware offload engines for Overlay Networks (“Tunneling”).

  • Virtual Protocol Interconnect
  • 1μs MPI ping latency
  • Up to 56Gb/s InfiniBand or 40 Gigabit Ethernet per port
  • Single- and Dual-Port options available

ConnectX-3 adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity, provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in Enterprise Data Centers, High-Performance Computing, and embedded environments.

  • Virtual Protocol Interconnect
  • 1μs MPI ping latency
  • Up to 56Gb/s InfiniBand or 40 Gigabit Ethernet per port
  • Single- and Dual-Port options available
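
The MPI ping-latency figures quoted above are conventionally measured with a ping-pong microbenchmark: two ranks bounce a small message back and forth, and one-way latency is half the averaged round-trip time. The sketch below reproduces that measurement pattern over local TCP sockets purely for illustration (real figures come from MPI benchmarks such as ping-pong tests run on the actual fabric; the function names here are assumptions):

```python
import socket
import threading
import time

def echo_server(server_sock):
    """Echo every received chunk straight back (plays the remote rank)."""
    conn, _ = server_sock.accept()
    with conn:
        while True:
            data = conn.recv(8)
            if not data:
                break
            conn.sendall(data)

def pingpong_latency(iters=1000, warmup=100):
    """Return estimated one-way latency in microseconds (half the mean RTT)."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(server.getsockname())
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    msg = b"x" * 8                      # small message, as in MPI ping-pong
    for _ in range(warmup):             # warm up caches and the transport path
        client.sendall(msg)
        client.recv(8)
    start = time.perf_counter()
    for _ in range(iters):
        client.sendall(msg)
        client.recv(8)
    elapsed = time.perf_counter() - start
    client.close()
    server.close()
    return elapsed / iters / 2 * 1e6    # half the round trip, in microseconds

print(f"one-way latency: {pingpong_latency():.1f} us")
```

Loopback TCP will report latencies far above the sub-microsecond numbers InfiniBand achieves; the point of the sketch is the measurement methodology, not the absolute value.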

InfiniBand Switch Systems

Mellanox’s family of InfiniBand switches delivers the highest performance and port density, with complete fabric management solutions that enable compute clusters and converged data centers to operate at any scale while reducing operational costs and infrastructure complexity.

model no. | description | key features

36-port Non-blocking Managed EDR 100Gb/s InfiniBand Smart Switch

  • 36 x EDR 100Gb/s ports in a 1U switch
  • 7Tb/s aggregate switch throughput
  • Up to 7.02 billion messages per second
  • 90ns switch latency
  • 136W typical power consumption

36-port Non-blocking Externally Managed EDR 100Gb/s InfiniBand Smart Switch

  • 36 x EDR 100Gb/s ports in a 1U switch
  • 7Tb/s aggregate switch throughput
  • 90ns switch latency
  • 136W typical power consumption

The highest-performing fabric solution, delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 12U chassis.

  • 216 EDR (100Gb/s) ports in a 12U switch
  • 43Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant

Delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 16U chassis.

  • 324 EDR (100Gb/s) ports in a 16U switch
  • 64Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant

The highest-performing fabric solution, delivering high bandwidth and low latency to Enterprise Data Centers and High-Performance Computing environments in a 28U chassis.

  • 648 EDR (100Gb/s) ports in a 28U switch
  • 130Tb/s switching capacity
  • Ultra-low latency
  • IBTA Specification 1.3 and 1.2.1 compliant

Unified Fabric Manager (UFM®)

Mellanox’s Unified Fabric Manager (UFM®) is a powerful platform for managing scale-out computing environments. UFM enables data center operators to monitor, efficiently provision, and operate the modern data center fabric. UFM eliminates the complexity of fabric management, provides deep visibility into traffic, and optimizes fabric performance.

Fabric Visibility & Control

UFM includes an advanced granular monitoring engine that provides real-time access to switch and host health and performance data, enabling:

  • Real-time identification of fabric-related errors and failures
  • Insight into fabric performance and potential bottlenecks
  • Preventive maintenance via granular threshold-based alerts
  • SNMP traps and scriptable actions
  • Correlation of monitored data to application/service level, enabling quick and effective fabric analysis
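
The threshold-based alerting listed above boils down to comparing sampled fabric counters against configured limits and firing an action (such as an SNMP trap or a script) for each breach. The following is a hypothetical sketch of that pattern; the counter names, threshold values, and `check_counters` helper are illustrative assumptions, not UFM's actual API:

```python
# Hypothetical thresholds per counter; real deployments tune these per fabric.
THRESHOLDS = {
    "port_xmit_discards": 10,   # packets dropped on transmit
    "symbol_errors": 5,         # physical-layer errors
    "link_downed": 1,           # unexpected link resets
}

def check_counters(samples):
    """Compare one sample of per-port counters against the thresholds.

    `samples` maps a port name to its counter readings. Returns a list of
    (port, counter, value) alerts; a real fabric manager would raise an
    SNMP trap or run a scripted action for each one.
    """
    alerts = []
    for port, counters in samples.items():
        for name, value in counters.items():
            limit = THRESHOLDS.get(name)
            if limit is not None and value >= limit:
                alerts.append((port, name, value))
    return alerts

# Example sample: one port with excessive symbol errors, one healthy port.
sample = {
    "switch1/p12": {"symbol_errors": 7, "port_xmit_discards": 0},
    "switch1/p13": {"symbol_errors": 0, "port_xmit_discards": 2},
}
print(check_counters(sample))   # → [('switch1/p12', 'symbol_errors', 7)]
```

Granular per-port thresholds like these are what turn raw counter polling into the preventive maintenance the list above describes.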

Contact XENON for more information.