InfiniBand is a powerful architecture designed to support I/O connectivity for Internet infrastructure. For XENON, InfiniBand is a means to move beyond existing interconnects and deliver the next-generation I/O interconnect standard for server and storage solutions. InfiniBand is a desirable solution in markets such as Application Clustering, Storage Area Networks, Inter-Tier Communication and Inter-Processor Communication (IPC) that require high bandwidth, QoS and RAS features. XENON designs, delivers and supports Mellanox’s extensive InfiniBand portfolio and the Intel® Omni-Path Architecture.

Mellanox InfiniBand Solutions

Mellanox provides complete end-to-end solutions (silicon, adapter cards, switch systems, cables and software) supporting InfiniBand networking technologies.

Mellanox’s InfiniBand portfolio consists of:

InfiniBand/VPI Adapter Cards

Data centers, large-scale storage systems and cloud computing environments require I/O services that provide high bandwidth, consolidation and unification, and flexibility; Mellanox InfiniBand/VPI adapter cards are designed to deliver these capabilities.

InfiniBand Switch Systems

Built with Mellanox’s 4th and 5th generation InfiniScale IV and SwitchX InfiniBand switch devices, Mellanox 20Gb/s, 40Gb/s, 56Gb/s and 100Gb/s InfiniBand switches provide the highest-performing fabric solution, delivering high bandwidth and low latency to Enterprise Data Centers, High-Performance Computing and Embedded environments.

Gateway Systems

Mellanox gateway systems enable data centers to run high-performance 40Gb/s networking on the hosts while connecting to lower-speed Gigabit and 10GbE LAN networks and 2, 4 and 8Gb/s Fibre Channel SAN networks, providing I/O consolidation.

Long-Haul Systems

Mellanox’s family of long-haul systems delivers the highest performance and port density, with a complete chassis and fabric management solution, enabling compute clusters and converged data centers to operate at any scale and over any distance.

Unified Fabric Manager

Mellanox’s UFM is a powerful platform for managing scale-out InfiniBand and Ethernet computing environments. UFM enables data center operators to efficiently provision, monitor and operate the modern data center fabric.


Intel® Omni-Path Architecture

The Intel® Omni-Path Architecture (OPA) introduces a multi-generation fabric architecture designed to meet the scalability needs of data centers ranging from the high end of HPC to the breadth of commercial data centers. Link-level reliability and pervasive ECC provide the reliability needed for large-scale systems.

Built on the foundations of its predecessor, the Intel® True Scale Fabric, along with additional intellectual property acquired from Cray, Omni-Path is Intel®'s bid to dominate the HPC arena with a low-latency, high-bandwidth, cost-efficient fabric.

Intel® has moved away from the InfiniBand lock-in model to a more functional fabric dedicated to HPC. This approach uses a technology called Performance Scaled Messaging (PSM), which optimises the InfiniBand stack to work more efficiently at the smaller message sizes typical of HPC workloads, usually MPI traffic. For OPA, Intel® has gone a step further, building on the original PSM architecture and incorporating proprietary technology acquired from the Cray Aries interconnect to enhance the capabilities and performance of OPA at both the fabric and host level.

Key Features of the New Intel® Omni-Path Fabric:

Enhanced Performance Scaled Messaging (PSM).

The application view of the fabric builds heavily on the demonstrated scalability of the Intel® True Scale Fabric architecture, and remains compatible with its application-level software, by leveraging an enhanced next-generation version of the Performance Scaled Messaging (PSM) library. PSM is designed specifically for the Message Passing Interface (MPI) and is very lightweight, using roughly one-tenth of the user-space code required by verbs.
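To illustrate the kind of traffic PSM is tuned for, the sketch below is a minimal MPI ping-pong latency test in C. It is generic MPI code rather than Intel® or Mellanox sample code, and the 8-byte message size and iteration count are arbitrary illustrative choices; latency-bound exchanges of small messages like this are exactly the pattern PSM is designed to accelerate.

/* Minimal MPI ping-pong latency sketch (generic MPI, not vendor sample code).
 * Small messages such as this 8-byte payload represent the latency-bound
 * MPI traffic that PSM is tuned for.
 * Build (example): mpicc pingpong.c -o pingpong
 * Run   (example): mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 10000
#define MSG_BYTES  8

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_BYTES] = {0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            /* Rank 0 sends, then waits for the echo */
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* Rank 1 echoes every message straight back */
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("Average one-way latency: %.2f us\n",
               elapsed / (2.0 * ITERATIONS) * 1e6);

    MPI_Finalize();
    return 0;
}

Because the MPI library chooses the underlying transport, the same source can run over PSM on True Scale or Omni-Path, or over verbs on an InfiniBand fabric.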

Upgrade Path to Intel® Omni-Path

Despite not being true InfiniBand, Intel® has maintained compatibility with its previous-generation True Scale Fabric, meaning that applications that work well on True Scale can be easily migrated to OPA. OPA integrates support for both the True Scale and InfiniBand APIs, ensuring backwards compatibility with previous-generation technologies and supporting any standard HPC application.

Other features include:

  • Adaptive Routing
  • Dispersive Routing
  • Traffic Flow Optimization
  • Packet Integrity Protection
  • Dynamic Lane Scaling

Intel® Omni-Path Architecture consists of:

Host Fabric Interface Adapters (HFIs)

Intel® currently offers two host fabric interface (HFI) adapters: a PCIe x8 58Gbps adapter and a PCIe x16 100Gbps adapter, both single-port. Both HFIs use the same silicon, so they offer the same latency capabilities and features as the high-end 100Gbps card.
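The gap between the two ratings follows largely from the PCIe link width rather than the fabric silicon. As a rough illustration, assuming the adapters sit in PCIe 3.0 slots (8 GT/s per lane with 128b/130b encoding) and ignoring PCIe protocol overheads, the raw link bandwidth works out as in the short sketch below: an x8 link cannot carry a full 100Gbps, while an x16 link can.

/* Back-of-the-envelope PCIe 3.0 link bandwidth (illustrative assumption:
 * the HFIs use PCIe 3.0). Real adapters lose further bandwidth to PCIe
 * protocol overhead, which is why the x8 HFI is rated at ~58Gbps rather
 * than the ~63Gb/s computed here.
 */
#include <stdio.h>

static double pcie3_gbps(int lanes)
{
    const double gt_per_lane = 8.0;            /* PCIe 3.0: 8 GT/s per lane */
    const double encoding    = 128.0 / 130.0;  /* 128b/130b line encoding   */
    return lanes * gt_per_lane * encoding;
}

int main(void)
{
    printf("PCIe 3.0 x8  raw link bandwidth: %.1f Gb/s\n", pcie3_gbps(8));   /* ~63.0  */
    printf("PCIe 3.0 x16 raw link bandwidth: %.1f Gb/s\n", pcie3_gbps(16));  /* ~126.0 */
    return 0;
}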

Along with the physical adapter cards, Supermicro will also be releasing a range of SuperServers with the Omni-Path fabric laid down on the motherboard, offering tighter integration and enabling a more compact server design. Taking this design even further, Intel® has announced that it will be integrating OPA onto future Intel® Xeon® processors, reducing latency further and increasing overall application performance.

Specific products include:


Intel® Omni-Path Edge and Director Class Switch 100 Series

The all-new Edge and Director switches for Omni-Path from Intel® offer a totally different design from traditional InfiniBand switches. Incorporating a new ASIC and a custom front-panel layout, Intel® has been able to offer up to 48 ports at 100Gbps from a single 1U switch, 12 more than its nearest competitor. The higher switching density allows for significant improvements within the data centre.

Product range consists of:


Intel® Omni-Path Software Components

Intel® Omni-Path Architecture software comprises the Intel® OPA Host Software Stack and the Intel® Fabric Suite.

Product range includes:

Intel® OPA Host Software

Intel’s host software strategy is to utilize the existing OpenFabrics Alliance interfaces, ensuring that today’s application software written to those interfaces runs on Intel® OPA with no code changes required. This immediately enables an ecosystem of applications to “just work.” All of the Intel® Omni-Path host software is open source.
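As a small illustration of what writing to the OpenFabrics interfaces means in practice, the sketch below enumerates RDMA devices through the standard verbs API (libibverbs). This is generic OpenFabrics code rather than an Intel® sample, and the same source builds and runs unchanged whether the device it finds is an InfiniBand HCA or an Omni-Path HFI exposed through its verbs provider.

/* Enumerate RDMA-capable devices via the OpenFabrics verbs API.
 * Generic libibverbs code, not vendor-specific sample code.
 * Build (example): gcc list_devices.c -o list_devices -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);

    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found.\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%-16s ports: %u  max QPs: %d\n",
                   ibv_get_device_name(devices[i]),
                   (unsigned)attr.phys_port_cnt, attr.max_qp);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}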

Intel® Fabric Suite

Provides comprehensive control of administrative functions using a mature Subnet Manager. With advanced routing algorithms, powerful diagnostic tools and full subnet manager failover, the Fabric Manager simplifies subnet, fabric, and individual component management, easing the deployment and optimization of large fabrics.

Intel® Fabric Manager GUI

Provides an intuitive, scalable dashboard and analysis tools for viewing and monitoring fabric status and configuration. The GUI may be run on a Linux or Windows desktop/laptop system with TCP/IP connectivity to the Fabric Manager.
