Artificial Intelligence Solutions

Deep Learning, a branch of Artificial Intelligence, is the fastest-growing field in machine learning. It uses many-layered Deep Neural Networks (DNNs) to learn levels of representation and abstraction that make sense of data such as images, sound, and text. Deep Learning is widely used in the research community to help solve big data problems such as computer vision, speech recognition, and natural language processing.
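
To make the idea concrete, here is a minimal sketch of the kind of many-layered network the term refers to, assuming PyTorch as the framework; the layer sizes and data are illustrative only.

```python
# A minimal "deep" (many-layered) neural network sketch in PyTorch.
# Sizes are illustrative; each successive layer learns a higher level
# of representation and abstraction from the one before it.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),                # e.g. 28x28 grayscale images in
    nn.Linear(28 * 28, 512),     # low-level features
    nn.ReLU(),
    nn.Linear(512, 256),         # mid-level features
    nn.ReLU(),
    nn.Linear(256, 10),          # high-level class scores out
)

x = torch.randn(64, 1, 28, 28)   # a dummy batch of 64 images
print(model(x).shape)            # torch.Size([64, 10])
```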

Practical examples include:

  • Vehicle, pedestrian and landmark identification for driver assistance
  • Image recognition
  • Speech recognition and translation
  • Natural language processing
  • Life sciences

XENON offers a variety of Deep Learning solutions built around GPU power, as the massively parallel architecture of GPUs is particularly well suited to these computation-intensive workloads.

Download XENON’s Capability Statement brochure to learn more.

These solutions include:

NVIDIA® DGX range

XENON is an Australian distributor of the NVIDIA DGX-2™, NVIDIA DGX-2H, NVIDIA DGX-1™ and NVIDIA DGX Station supercomputers, which enable data scientists and artificial intelligence (AI) researchers to achieve the accuracy, simplicity, and speed they need for Deep Learning success. Faster training and iteration ultimately mean faster innovation and faster time-to-market.

NVIDIA DGX-2

The first 2 petaFLOPS system, combining 16 fully interconnected GPUs for 10x the Deep Learning performance.

  • NVIDIA® Tesla V100 32GB, SXM3
  • 16 GPUs in total across both GPU boards, 512GB total HBM2 memory
  • 12 NVSwitches in total
  • 8x EDR InfiniBand/100 GbE Ethernet

NVIDIA DGX-2H

Optimised for the most demanding large-scale AI workloads.

  • The world’s first 2.1 petaFLOPS system, powered by 16 of the world’s most advanced GPUs
  • Features faster Tesla V100 GPUs running at 450 watts per GPU
  • NVIDIA® DGX™ software
  • NVIDIA NVSwitch

NVIDIA DGX-1

The world’s first purpose-built system optimised for Deep Learning, with fully integrated hardware and software that can be deployed quickly and easily. Its revolutionary performance significantly accelerates training time, making it the world’s first Deep Learning supercomputer in a box.

  • Built with groundbreaking Pascal™-powered NVIDIA® Tesla® P100 GPU accelerators, interconnected with NVIDIA® NVLink™
  • Software stack includes major deep learning frameworks, the NVIDIA® Deep Learning SDK, the DIGITS™ GPU training system, drivers, and CUDA®, for designing the most accurate deep neural networks (DNNs); a brief usage sketch follows this list
  • Applications run up to 12x faster than previous GPU-accelerated solutions
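
As a rough illustration of how that software stack is driven from user code, here is a minimal sketch assuming PyTorch as the framework; the model, data, and hyperparameters are placeholders.

```python
# Minimal sketch: one training step on a CUDA GPU (assumes PyTorch).
# On a multi-GPU system, frameworks expose each GPU as a separate device.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(torch.cuda.device_count(), "CUDA device(s) visible")

model = nn.Linear(1024, 10).to(device)      # move parameters to the GPU
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1024, device=device)    # dummy batch created on-GPU
y = torch.randint(0, 10, (32,), device=device)

opt.zero_grad()
loss = loss_fn(model(x), y)                 # forward pass runs as CUDA kernels
loss.backward()                             # backward pass likewise
opt.step()
print("loss:", loss.item())
```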

NVIDIA DGX Station

The personal supercomputer for leading-edge AI development.

  • 3x the performance for deep learning training
  • 100x speed-up on large dataset analysis, compared with a 20-node Spark server cluster
  • 5x increase in I/O performance over PCIe-connected GPUs, with NVIDIA NVLink technology

XENON DEVCUBE G2

XENON’s new DEVCUBE G2 is ideal for deep learning, machine learning and AI at your desktop. GPUs are extremely efficient and particularly well suited to these workloads, making them a key enabler for deep learning research, innovation and development.

All this results in:

  • faster turnaround times
  • the freedom to explore multiple network architectures
  • accelerated dataset manipulation
  • an all in one powerful, energy-efficient, cool, and quiet solution

Ideal for researchers, start-up companies, and anyone engaged in DL and AI exploration.

  • Intel Core i7 or i9
  • Up to 18 Cores
  • Supports up to 128GB DDR4 Non-ECC, Un-buffered Memory
  • Supports up to 4 GPUs: GTX, Titan V or Quadro

Ideal for deep learning, machine learning and AI at your desktop

  • Intel Xeon W
  • Up to 18 Cores
  • Supports up to 512GB DDR4 Registered ECC Memory
  • Supports up to 4 GPUs: GTX, Titan V or Quadro

The DEVCUBE Pilot is a beginner GPU workstation equipped to power and manage core cognitive technology platforms, including Machine Learning (ML), Artificial Intelligence (AI) and Deep Learning (DL).

  • Intel Core i9-7900X
  • Supports up to 128GB DDR4 Non-ECC, Un-buffered Memory
  • Supports up to 2 GPUs – GEFORCE RTX 2080 Ti or TITAN RTX with NVLINK™ BRIDGE

XENON GPU Servers

XENON’s new generation of GPU-optimised servers provides the highest levels of parallel performance for Machine/Deep Learning workloads. Compared to a symmetric dual-processor design, the new systems deliver 21% higher throughput and 60% lower latency*, plus an innovative single-root-complex architecture. In addition, these servers are thermally optimised for either actively or passively cooled GPUs without preheating, and are equipped with redundant 2000W Titanium Level (96%+ efficiency) digital power supplies for better reliability and lower TCO.

The NITRO™ GX49 rack server supports up to 10x NVIDIA® GPU accelerators

  • Dual Intel® Xeon® Scalable processors
  • Up to 3TB DDR4 ECC Registered DIMM
  • Support for NVIDIA Quadro/Quadro RTX and Tesla GPUs

The NITRO™ G8 GPU server harnesses increased in-memory capability to reduce the time required to deliver highly complex computations.

  • Single Intel® Xeon® Scalable processor
  • Up to 768GB DDR4 ECC Registered DIMM
  • Support for 2x Double Width Passive GPUs
  • Supports NVIDIA Quadro, Quadro RTX and Tesla GPUs

The NITRO™ GX18 Rack Server supports up to 4x NVIDIA® GPU accelerators in a compact 1U form factor

  • Dual Intel® Xeon® Scalable processors
  • Up to 1.5TB DDR4 ECC Registered DIMM (12 DIMM slots)
  • Support for 4x Double Width Passive GPUs
  • Supports NVIDIA Quadro, Quadro RTX and Tesla GPUs

The NITRO™ G29 rack server supports up to 6x NVIDIA® GPU accelerators

  • Dual Intel® Xeon® Scalable processors
  • Up to 2TB DDR4-2666 ECC Registered DIMM
  • Support for 6x Double Width Passive GPUs
  • Supports NVIDIA Quadro, Quadro RTX and Tesla GPUs

This server supports up to 4x NVIDIA® Tesla® V100 SXM2 GPUs and is ideal for Artificial Intelligence, Deep Learning and HPC workloads.

  • Dual Intel® Xeon® Scalable processors
  • Supports up to 1.5TB DDR4-2666 ECC Registered DIMM
  • Up to 300 GB/s GPU-to-GPU bandwidth via NVIDIA® NVLink™ (see the sketch below)
  • Optimised for NVIDIA® GPUDirect™ RDMA
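
To give a feel for the GPU-to-GPU traffic that NVLink accelerates relative to PCIe, here is a hedged sketch assuming PyTorch and at least two visible CUDA devices; the tensor size is arbitrary.

```python
# Sketch: a direct GPU-to-GPU tensor copy, the kind of transfer NVLink
# accelerates. Assumes PyTorch and at least two CUDA devices.
import torch

assert torch.cuda.device_count() >= 2, "this sketch needs two GPUs"

# Can device 0 read device 1's memory directly (peer-to-peer)?
print("P2P 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))

a = torch.randn(4096, 4096, device="cuda:0")  # ~64 MB tensor on GPU 0
b = a.to("cuda:1")                            # device-to-device copy
torch.cuda.synchronize()
print("copied to:", b.device)                 # cuda:1
```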

NVIDIA DGX POD™

NVIDIA DGX POD™ offers a proven design approach for building your GPU-accelerated AI data center with NVIDIA DGX-1, leveraging NVIDIA’s best practices and insights gained from real-world deployments.

The DGX POD™ is an optimised data centre rack containing up to nine DGX-1 servers, twelve storage servers, and three networking switches to support single and multi-node AI model training and inference using NVIDIA AI software.

The DGX POD™ is also designed to be compatible with leading storage and networking technology providers. XENON offers a portfolio of NVIDIA DGX POD™ reference architecture solutions, including NetApp, IBM Spectrum, DDN and Pure Storage. All incorporate the best of NVIDIA DGX POD™ and are delivered as fully integrated, ready-to-deploy solutions to make your data centre AI deployments simpler and faster.

For more information, email us at info@xenon.com.au.

NVIDIA® TITAN

NVIDIA® TITAN graphics cards are groundbreaking. They give you the power to accomplish things you never thought possible.

NVIDIA® TITAN V is the most powerful graphics card ever created for the PC, driven by the world’s most advanced architecture

  • NVIDIA® Volta Architecture
  • 12 GB HBM2 Frame Buffer
  • 1455 MHz Boost Clock
  • 640 Tensor Cores
  • 5120 NVIDIA® CUDA® cores

NVIDIA® TITAN Xp harnesses the incredible computing horsepower and groundbreaking NVIDIA Pascal™ architecture

  • Pascal GPU Architecture
  • 12 GB G5X Frame Buffer
  • 11.4 Gbps Memory Speed
  • 1582 MHz Boost Clock
  • 3840 NVIDIA® CUDA® cores running at 1.6 GHz
  • 12 TFLOPS of brute force

NVIDIA® Jetson Embedded Platforms

NVIDIA Jetson is the world’s leading visual computing platform for GPU-accelerated parallel processing in the mobile embedded systems market. Its high-performance, low-energy computing for deep learning and computer vision makes Jetson the ideal solution for compute-intensive embedded projects like the following (a brief inference sketch appears after the list):

  • Drones
  • Autonomous Robotic Systems
  • Mobile Medical Imaging
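
By way of illustration, the sketch below shows the shape of a simple on-device inference loop, assuming PyTorch and torchvision are installed; production deployments typically optimise the model further, for example with NVIDIA TensorRT.

```python
# Sketch: image-classification inference of the kind an embedded device
# runs. Assumes PyTorch/torchvision; the random frame stands in for a
# camera capture, and the pretrained model choice is illustrative.
import torch
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet18(weights="IMAGENET1K_V1").to(device).eval()

frame = torch.rand(1, 3, 224, 224, device=device)  # stand-in camera frame
with torch.no_grad():                              # inference only
    scores = model(frame)
print("predicted class index:", scores.argmax(dim=1).item())
```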

The new NVIDIA® Jetson™ TX2 is a high-performance, low-power supercomputer on a module that provides extremely quick, accurate AI inferencing in everything from robots and drones to enterprise collaboration devices and intelligent cameras.

  • NVIDIA Pascal™, 256 CUDA cores
  • HMP Dual Denver 2/2 MB L2 + Quad ARM® A57/2 MB L2
  • 4Kx2K 60 Hz encode (HEVC) and 4Kx2K 60 Hz decode (12-bit support)
  • 8 GB 128-bit LPDDR4, 59.7 GB/s

NVIDIA® Jetson™ Xavier is the latest addition to the Jetson platform. It’s an AI computer for autonomous machines, delivering the performance of a GPU workstation in an embedded module under 30W.

  • 512-core Volta GPU with Tensor Cores
  • (2x) NVDLA Engines DL Accelerator
  • 8-core ARMv8.2 64-bit CPU, 8MB L2 + 4MB L3
  • 16GB 256-bit LPDDR4x | 137 GB/s

The NVIDIA® Jetson Nano™ enables the development of millions of new small, low-cost, low-power AI systems.

  • NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores
  • Quad-core ARM® Cortex®-A57 MPCore processor
  • 4 GB 64-bit LPDDR4
  • 16 GB eMMC 5.1 Flash

IBM® Power System™ Accelerated Compute Servers

IBM® Power System™ Accelerated Compute Servers deliver unprecedented performance for modern HPC, analytics, and artificial intelligence (AI). Enterprises can now deploy data-intensive workloads, like deep learning frameworks and accelerated databases, with confidence.

The IBM Power System AC922 is a leadership HPC and AI server with two POWER9 CPUs (with Enhanced NVLink) and 4-6 NVIDIA Tesla V100 GPUs in 2U.

  • Faster I/O: up to 5.6x more I/O bandwidth than x86 servers
  • The best GPUs: 2-6 NVIDIA® Tesla® V100 GPUs with NVLink
  • Extraordinary CPUs: 2x POWER9 CPUs, designed for AI
  • Simplest AI architecture: Share RAM across CPUs and GPUs

Cisco UCS C480 ML M5 Rack Server

The C480 ML M5 rack server, developed in partnership with NVIDIA, a leader in AI computing, supports eight NVIDIA Tesla V100 Tensor Core GPUs with NVIDIA NVLink interconnect. The V100 is the world’s first GPU to break the 100 teraflops barrier of deep learning performance with a whopping 640 Tensor Cores. NVLink provides 10x the bandwidth of PCIe and connects all of the GPUs in a point-to-point network (hybrid cube mesh) that provides optimal performance for these super-fast GPUs.
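
For context on what Tensor Cores accelerate: they execute mixed-precision matrix math, which frameworks reach through half-precision tensors or automatic mixed precision. A minimal sketch, assuming PyTorch and a CUDA-capable GPU:

```python
# Sketch: mixed-precision matrix multiplies, the operations Tensor Cores
# accelerate on V100-class GPUs. Assumes PyTorch and a CUDA device.
import torch

a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
c = a @ b  # FP16 matmul, eligible to run on Tensor Cores

# The same idea via automatic mixed precision (AMP), as used in training:
with torch.autocast(device_type="cuda", dtype=torch.float16):
    c2 = a.float() @ b.float()  # autocast runs eligible ops in FP16
print(c.dtype, c2.dtype)        # torch.float16 torch.float16
```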

Purpose-built Cisco UCS C-Series server for Deep Learning in a 4-Rack-Unit (4RU) form factor

  • 8 NVIDIA SXM2 V100 32G modules with NVLink interconnect
  • Intel® Xeon® Scalable processors
  • Up to 28 cores per socket
  • 2666-MHz DDR4 memory, 24 DIMM slots, for up to 3 terabytes (TB) of total memory
  • 4 PCI Express (PCIe) 3.0 slots