Intel® Omni-Path Architecture (Intel® OPA), an element of Intel® Scalable System Framework, delivers the performance for tomorrow’s high performance computing (HPC) workloads and the ability to scale to tens of thousands of nodes—and eventually more—at a price competitive with today’s fabrics. The Intel® OPA 100 Series product line is an end-to-end solution of PCIe* adapters, silicon, switches, cables, and management software. As the successor to Intel® True Scale Fabric, this optimized HPC fabric is built upon a combination of enhanced IP and Intel® technology.

For software applications, Intel OPA will maintain consistency and compatibility with existing Intel True Scale Fabric and InfiniBand* APIs by working through the open source OpenFabrics Alliance (OFA) software stack on leading Linux* distribution releases. Intel® True Scale Fabric customers will be able to migrate to Intel® OPA through an upgrade program.

Intel® Omni-Path Host Fabric Interface (HFI)

Designed specifically for HPC, the Intel® Omni-Path Host Fabric Interface (Intel® OP HFI) uses an advanced connectionless design that delivers performance that scales with high node and core counts, making it the ideal choice for the most demanding application environments. Intel® OP HFI supports 100 Gb/s per port, which means each Intel OP HFI port can deliver up to 25 GB/s of bidirectional bandwidth. The same ASIC used in the Intel OP HFI will also be integrated into future Intel® Xeon® processors and used in third-party products.
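
The bidirectional figure follows from simple arithmetic on the link rate; a minimal sketch (assuming 8 bits per byte and ignoring wire-protocol overhead):

    #include <stdio.h>

    int main(void) {
        const double link_gbps   = 100.0;               /* per-port line rate, Gb/s     */
        const double one_way_gbs = link_gbps / 8.0;     /* 12.5 GB/s in each direction  */
        const double bidir_gbs   = one_way_gbs * 2.0;   /* send + receive = 25 GB/s     */

        printf("unidirectional: %.1f GB/s, bidirectional: %.1f GB/s\n",
               one_way_gbs, bidir_gbs);
        return 0;
    }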

Some key features:

  • Multi-core scaling – support for up to 160 contexts
  • 16 Send DMA engines (M2IO usage)
  • Efficiency – large MTU support (4 KB, 8 KB, and 10 KB) reduces per-packet processing overhead (see the sketch after this list); improved packet-level interfaces increase utilization of on-chip resources
  • Receive DMA engine arrival notification
  • Each HFI can map a ~128 GB window at 64-byte granularity
  • Up to 8 virtual lanes for differentiated QoS
  • ASIC designed to scale up to 160M messages/second and 300M bidirectional messages/second
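
To illustrate the per-packet-overhead point in the MTU bullet above, here is a rough sketch of how many packets a fixed-size message fragments into at each supported MTU. It counts payload bytes only and treats the 10 KB MTU as 10,240 bytes; both are simplifying assumptions, not Intel specifications.

    #include <stdio.h>

    int main(void) {
        const size_t message_bytes = 1 << 20;            /* example: a 1 MiB message */
        const size_t mtus[] = { 4096, 8192, 10240 };     /* 4 KB, 8 KB, 10 KB MTUs   */

        for (size_t i = 0; i < sizeof mtus / sizeof mtus[0]; ++i) {
            size_t packets = (message_bytes + mtus[i] - 1) / mtus[i];   /* ceiling divide */
            printf("MTU %5zu bytes -> %4zu packets per 1 MiB message\n", mtus[i], packets);
        }
        return 0;
    }

Fewer packets per message means fewer header-processing and completion events, which is where the overhead reduction comes from.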

Available adapter models:

Low Profile PCIe Card (PCIe x16)

  • Single Port
  • QSFP28 Connector
  • Link Speed 100 Gb/s
  • Max Power: 7.4/11.7 W (Copper)
  • Max Power: 10.6/14.9 W (Optical)

Low Profile PCIe Card (PCIe x8)

  • Single Port
  • QSFP28 Connector
  • ~58 Gb/s on a 100 Gb/s Link
  • Max Power: 6.3/8.3 W (Copper)
  • Max Power: 9.5/11.5 W (Optical)

Intel® Omni-Path Edge Switches

The Intel® Omni-Path Edge Switch comes in two models, both supporting 100 Gb/s on all ports: an entry-level 24-port switch for small clusters and a 48-port switch.

The larger switch, in addition to enabling a 48-port fabric in 1U, can be combined with other edge switches and directors to build much larger multi-tier fabrics. These Intel® Omni-Path Edge Switches are members of the Intel® Omni-Path Fabric 100 Series of switches, host adapters, and software, which together deliver an exceptional set of high-speed networking features and functions.
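
As a rough illustration of how fixed-radix edge switches combine into larger fabrics, the sketch below applies the generic two-tier fat-tree rule: half of each edge switch's ports face hosts and half face the spine, so radix-48 switches cap a non-blocking two-tier fabric at 48 × 48 / 2 = 1,152 host ports. This is a general topology calculation, not a specific Intel configuration; larger node counts use directors or additional tiers.

    #include <stdio.h>

    int main(void) {
        const int radix         = 48;            /* ports per edge switch                    */
        const int host_ports    = radix / 2;     /* half the ports face hosts (non-blocking) */
        const int edge_switches = radix;         /* radix/2 spines, each linking all edges   */
        const int max_hosts     = host_ports * edge_switches;   /* 24 * 48 = 1152 hosts      */

        printf("two-tier fat-tree with radix-%d switches: up to %d host ports\n",
               radix, max_hosts);
        return 0;
    }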

Highlights include:

  • 100Gb/s Line Rate
  • 100-110ns Switch Latency
  • Scalable, predictable low latency under load
  • Multiple Virtual Lanes (VLs) per physical port
  • Supports virtual fabric partitioning

Available switch models:

48-port Edge Switch – 48 ports at 100 Gb/s from a single 1U switch, currently 12 more ports than its nearest competitor

  • 48 ports at up to 100 Gb/s
  • 1U (1.75″)
  • 9.6 Tb/s Capacity
  • 100 Gb/s port speed

24-port Edge Switch – standard QSFP28 connectors with support for passive copper and active cables

  • 24 ports at up to 100 Gb/s
  • 1U (1.75″)
  • 4.8 Tb/s Capacity
  • 100 Gb/s port speed

Intel® Omni-Path Director Class Switches

The Intel® Omni-Path Director Class Switch (Intel® OP Director Class Switch), based on Intel’s next generation 48-radix switch silicon, has many innovative features that provide optimum performance for both small and large fabrics. Both switch models are dense form factor designs; the larger scales up to 768 100 Gb/s ports in a 20U footprint.

Designed to be modular alongside edge switches, host adapters, and software, the Intel® OP Director Class Switch 100 series enables customers to tailor their system configuration to meet present and future needs.

Highlights include:

  • Scales in 32-port increments, with each port providing 100 Gb/s of bandwidth
  • Scales up to 153.6 terabits per second aggregate bandwidth
  • 300-330ns Switch Latency
  • Scalable, predictable low latency under load

Available switch models:

20U Director – delivers 100 Gb/s port bandwidth with latency that stays low even at extreme scale

  • Up to 768 ports at 100 Gb/s
  • 20U (35″)
  • 153.6 Tb/s Capacity
  • 1 or 2 Management Modules
  • Up to 24 Leaf Modules (32 ports each)
  • Up to 8 Spine Modules

7U Director – delivers 100 Gb/s port bandwidth with latency that stays low even at extreme scale

  • Up to 192 ports at 100 Gb/s
  • 7U (12.25″)
  • 38.4 Tb/s Capacity
  • 1 or 2 Management Modules
  • Up to 6 Leaf Modules (32 ports each)
  • Up to 3 Spine Modules

Intel® Omni-Path Fabric Software Components

Intel® Omni-Path Architecture software comprises the Intel® OPA Host Software Stack and the Intel® Fabric Suite.

Intel® OPA Host Software

Intel’s host software strategy is to utilize the existing OpenFabrics Alliance interfaces, ensuring that today’s application software written to those interfaces runs on Intel® OPA with no code changes required. This immediately enables an ecosystem of applications to “just work.” All of the Intel® Omni-Path host software is open source. As with previous generations, PSM (Performance Scaled Messaging) provides a fast data path with an HPC-optimized, lightweight software (SW) driver layer. In addition, standard I/O-focused protocols are supported via the standard verbs layer.
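
As a small illustration of the “just work” point, here is a minimal MPI exchange in C. Nothing in the source references the fabric: the MPI library selects the PSM2 fast path (or the verbs layer for I/O-style traffic) underneath, so the same code runs over Intel OPA, InfiniBand, or plain TCP. This is a generic MPI sketch, not Intel-specific code.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal two-rank exchange; fabric selection lives entirely in the
     * MPI/OFA layers underneath, so this source needs no changes for Intel OPA. */
    int main(int argc, char **argv) {
        int rank, value = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d over the fabric chosen by the MPI layer\n", value);
        }

        MPI_Finalize();
        return 0;
    }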

Intel® Fabric Suite

Provides comprehensive control of administrative functions using a mature Subnet Manager. With advanced routing algorithms, powerful diagnostic tools and full subnet manager failover, the Fabric Manager simplifies subnet, fabric, and individual component management, easing the deployment and optimization of large fabrics.

Intel® Fabric Manager GUI

Provides an intuitive, scalable dashboard and analysis tools for viewing and monitoring fabric status and configuration. The GUI may be run on a Linux or Windows desktop/laptop system with TCP/IP connectivity to the Fabric Manager.