Benefits

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, and enterprise applications. Pioneered in 2007 by NVIDIA®, GPU-accelerated computing now powers energy-efficient data centres in government labs, universities, enterprises, and small and medium businesses around the world.

How Do Applications Accelerate with GPUs?
GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run significantly faster.
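The offload pattern described above can be sketched with a minimal CUDA C++ vector addition (an illustrative example, not tied to any product on this page): the CPU prepares the data, the compute-intensive loop runs as a kernel across thousands of GPU threads, and the result is copied back to the host.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Host (CPU) buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) buffers
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Offload the compute-intensive portion to the GPU
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back; the rest of the program stays on the CPU
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Each thread computes a single element, which is what allows thousands of GPU cores to work on the array simultaneously.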

NVIDIA® Tesla® GPU accelerators

Accelerate your most demanding HPC, hyperscale, and enterprise data center workloads with NVIDIA® Tesla® GPU accelerators. Scientists can now crunch through petabytes of data up to 10x faster than with CPUs in applications ranging from energy exploration to deep learning.

model no.

description

key features

The world’s most advanced data center GPU ever built to accelerate AI, HPC, and graphics

  • Powered by NVIDIA Volta architecture
  • Comes in 16 and 32GB configurations
  • 640 Tensor Cores
  • Breaks the 100 teraFLOPS (TFLOPS) barrier of Deep Learning performance

The NVIDIA® T4 GPU is the world’s most performant scale-out accelerator. Its low-profile, 70W design is powered by NVIDIA Turing™ Tensor Cores

  • NVIDIA Turing™ GPU architecture
  • 320 NVIDIA Turing Tensor Cores
  • 2,560 NVIDIA CUDA® Cores
  • 70-watt, small PCIe form factor

With 47 TOPS (Tera-Operations Per Second) of INT8 inference performance per GPU, a single server with 8 Tesla P40s delivers the performance of over 140 CPU servers

  • The world’s fastest processor for inference workloads
  • 47 TOPS of INT8 for maximum inference throughput and responsiveness
  • Hardware-decode engine capable of transcoding and inferencing 35 HD video streams in real time

Designed specifically for data centres that are virtualising desktop graphics, this dual-slot PCI Express card for rack and tower servers supports 32 concurrent users.

  • 4,096 NVIDIA® CUDA® cores
  • 16GB of GDDR5 memory
  • 7.4 TFLOPS single-precision performance
  • GRID 2.0 vGPU software support

NVIDIA DGX Systems

Inspired by the demands of deep learning and analytics, NVIDIA® DGX™ Systems are the essential instruments for AI research built on the new NVIDIA Volta™ GPU platform.

model no.

description

key features

The world’s most powerful AI system for the most complex AI challenges

  • First 2 petaFLOPS system
  • 16 fully interconnected GPUs for 10X the Deep Learning performance
  • NVIDIA® DGX™ software
  • NVIDIA NVSwitch

Optimised for the Most Demanding Large Scale AI Workloads

  • The world’s first 2.1 petaFLOPS system, powered by 16 of the world’s most advanced GPUs
  • Features faster Tesla V100 GPUs running at 450 watts per GPU
  • NVIDIA® DGX™ software
  • NVIDIA NVSwitch

The new NVIDIA® DGX-1 is similar to the previous generation offering based on Pascal, but is powered by eight Tesla V100 GPUs, linked together via next-gen NVIDIA® NVLink interconnect technology

  • Eight Tesla V100 GPU accelerators
  • Connected by 300GB/s NVLink™ technology
  • Up to 960 TFLOPS of peak performance
  • 5,120 Tensor Cores

The Personal Supercomputer for Leading-Edge AI Development

  • 3x the performance for Deep Learning training
  • 100x speed-up on large data-set analysis, compared with a 20-node Spark server cluster
  • 5x increase in I/O performance over PCIe-connected GPUs with NVIDIA NVLink technology

XENON GPU Servers

XENON rackmount GPU servers and systems are fully optimised for the latest GPU computing modules. They deliver up to 10X higher application performance than the latest multi-core CPU systems, providing real-world scalability and breakthrough power efficiency across a wide array of HPC applications.

model no.

description

key features

The NITRO™ GX49 rack server supports up to 10x NVIDIA® GPU accelerators

  • Dual Intel® Xeon® Scalable Processors family
  • Up to 3TB DDR4 ECC Registered DIMM
  • Support for Nvidia Quadro / Quadro RTX and Tesla GPUs

The NITRO™ G8 GPU server harnesses increased in-memory capacity to reduce the time required for highly complex computations.

  • Single Intel® Xeon® Scalable Processors family
  • Up to 768GB DDR4 ECC Registered DIMM
  • Support for 2x Double Width Passive GPUs
  • Supports Nvidia Quadro, Quadro RTX and Tesla GPUs

The NITRO™ GX18 Rack Server supports up to 4x NVIDIA® GPU accelerators in a compact 1U form factor

  • Dual Intel® Xeon® Scalable Processors family
  • Up to 1.5TB DDR4 ECC Registered DIMM (12 DIMM slots)
  • Support for 4x Double Width Passive GPUs
  • Supports Nvidia Quadro, Quadro RTX and Tesla GPUs

The NITRO™ G29 rack server supports up to 6x NVIDIA® GPU accelerators

  • Dual Intel® Xeon® Scalable Processors family
  • Up to 2TB DDR4-2666 ECC Registered DIMM
  • Support for 6x Double Width Passive GPUs
  • Supports Nvidia Quadro, Quadro RTX and Tesla GPUs

Supports up to 4x NVIDIA® Tesla® V100 SXM2 GPUs and is ideal for artificial intelligence, Deep Learning and HPC workloads.

  • Dual Intel® Xeon® Scalable Processors family
  • Supports up to 1.5TB DDR4-2666 ECC Registered DIMM
  • Up to 300 GB/s GPU-to-GPU
  • NVIDIA® NVLINK™
  • Optimised for NVIDIA® GPUDirect™ RDMA

XENON GPU Personal Supercomputers

Turn your standard workstations into powerful personal supercomputers and receive cluster level performance right at your desk. Graphics Processing Units (GPUs) are outstanding at delivering performance where massively parallel floating point calculations are required.

XENON’s NITRO™ range of personal supercomputers is equipped with NVIDIA® Tesla® GPUs and the CUDA® architecture to deliver breakthrough performance for parallel computing applications.

model no.

description

key features

The XENON Nitro™ T8 supports up to five GPU accelerators for intensive computational work

  • Dual CPU sockets
  • Support for Intel® Xeon® Scalable processors
  • Up to 56 cores
  • Up to 2TB ECC 3DS LRDIMM
  • Up to DDR4-2666MHz; 16 DIMM slots

A state-of-the-art deskside personal supercomputer that supports up to 4 GPUs. It delivers unmatched graphics compute per cubic centimetre, providing the visual compute density for breakthrough levels of capability and productivity.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 4x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs
  • 8x 3.5” hot-swap Drive bays

Optimised for Animation/Visualization and High Performance Computing environments that require intensive processing power for visualisation, data-modelling, media production and design.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 3x Double Width Active GPUs
  • Support for GTX and Titan X Active GPUs

Optimised for Visual and High Performance Computing environments that require intensive processing power for visualisation, data-modelling, media production and design.

  • Single Intel® Xeon® E5-2600/1600 v4 family
  • Up to 512GB DDR4 ECC Registered DIMM
  • Support for 3x Double Width Active GPUs
  • Support for GTX and Titan X Active GPUs

Ideal for researchers, start-up companies, and anyone engaged in Deep Learning and AI exploration.

  • Intel Core i7 or i9
  • Up to 18 Cores
  • Supports up to 128GB DDR4 Non-ECC, Un-buffered Memory
  • Supports up to 4 GPUs: GTX, Titan V or Quadro

Ideal for Deep Learning, Machine Learning and AI at your desktop

  • Intel Xeon W
  • Up to 18 Cores
  • Supports up to 512GB DDR4 Registered ECC Memory
  • Supports up to 4 GPUs: GTX, Titan V or Quadro

The DEVCUBE Pilot is a beginner GPU workstation equipped to power and manage core cognitive technology platforms, including Machine Learning (ML), Artificial Intelligence (AI) and Deep Learning (DL).

  • Intel Core i9-7900X
  • Supports up to 128GB DDR4 Non-ECC, Un-buffered Memory
  • Supports up to 2 GPUs – GeForce RTX 2080 Ti or TITAN RTX with NVLink™ bridge

NVIDIA DGX POD™

NVIDIA DGX POD™ offers a proven design approach for building your GPU-accelerated AI data center with NVIDIA DGX-1, leveraging NVIDIA’s best practices and insights gained from real-world deployments.

The DGX POD™ is an optimised data centre rack containing up to nine DGX-1 servers, twelve storage servers, and three networking switches to support single and multi-node AI model training and inference using NVIDIA AI software.

The DGX POD™ is also designed to be compatible with leading storage and networking technology providers. XENON offers a portfolio of NVIDIA DGX POD™ reference architecture solutions including NetApp, IBM Spectrum, DDN and Pure Storage. All incorporate the best of NVIDIA DGX POD™ and are delivered as fully-integrated and ready-to-deploy solutions to make your data centre AI deployments simpler and faster.

For more information email us at info@xenon.com.au


IBM® Power System™ Accelerated Compute Servers

IBM® Power System™ Accelerated Compute Servers deliver unprecedented performance for modern HPC, analytics, and artificial intelligence (AI). Enterprises can now deploy data-intensive workloads, like Deep Learning frameworks and accelerated databases, with confidence.

model no.

description

key features

Enterprise-ready, with PowerAI Deep Learning frameworks

  • Faster I/O: up to 5.6x more I/O bandwidth than x86 servers
  • The best GPUs: 2-6 NVIDIA® Tesla® V100 GPUs with NVLink
  • Extraordinary CPUs: 2x POWER9 CPUs, designed for AI
  • Simplest AI architecture: Share RAM across CPUs & GPUs

XENON GPU Clusters

The NVIDIA® Tesla® architecture is a massively parallel platform that utilises high-performance GPU cards and advanced interconnect technologies to accelerate time-to-insight. With XENON you can customise an NVIDIA® GPU cluster solution that fits your precise use case or application needs. XENON’s cluster solutions are powered by NVIDIA® Tesla® P100, K80, M40, M60, M4 and M6 cards and can help your company capitalise on the data explosion by processing large or compute-intensive workloads without increasing the power budget or physical footprint of your data centre. Contact XENON today for your customised GPU cluster solution.

NVIDIA® Jetson Embedded Platforms

NVIDIA Jetson is the world’s leading visual computing platform for GPU-accelerated parallel processing in the mobile embedded systems market. Its high-performance, low-energy computing for Deep Learning and computer vision makes Jetson the ideal solution for compute-intensive embedded projects like:

  • Drones
  • Autonomous Robotic Systems
  • Mobile Medical Imaging

model no.

description

key features

Jetson Xavier™ NX brings supercomputer performance to the edge in a small form factor system on module (SOM).

  • 21 TOPS
  • 384-core NVIDIA Volta™ GPU with 48 Tensor Cores
  • 6-core NVIDIA Carmel ARM®v8.2 64-bit CPU, 6MB L2 + 4MB L3
  • 8 GB 128-bit LPDDR4x, 51.2GB/s

The new NVIDIA® Jetson™ TX2 is a high-performance, low-power supercomputer on a module that provides extremely quick, accurate AI inferencing in everything from robots and drones to enterprise collaboration devices and intelligent cameras.

  • NVIDIA Pascal™, 256 CUDA cores
  • HMP Dual Denver 2/2 MB L2 + Quad ARM® A57/2 MB L2
  • 4K x 2K 60 Hz encode (HEVC) and 4K x 2K 60 Hz decode (12-bit support)
  • 8 GB 128-bit LPDDR4, 59.7 GB/s

NVIDIA® Jetson™ Xavier is the latest addition to the Jetson platform. It’s an AI computer for autonomous machines, delivering the performance of a GPU workstation in an embedded module under 30W.

  • 512-core Volta GPU with Tensor Cores
  • 2x NVDLA Deep Learning accelerator engines
  • 8-core ARMv8.2 64-bit CPU, 8MB L2 + 4MB L3
  • 16GB 256-bit LPDDR4x | 137 GB/s

Enables the development of millions of new small, low-cost, low-power AI systems.

  • NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores
  • Quad-core ARM® Cortex®-A57 MPCore processor
  • 4 GB 64-bit LPDDR4
  • 16 GB eMMC 5.1 Flash

NVIDIA® GPU Software

CUDA PARALLEL COMPUTING PLATFORM

CUDA® is a parallel computing platform and programming model invented by NVIDIA®. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are using GPU-accelerated computing for broad-ranging applications.

CUDA 8 gives developers direct access to powerful new Pascal features such as Unified Memory and lightning-fast peer-to-peer communication using NVLink. Also included in this release is a new graph analytics library, nvGRAPH, which can be used for fraud detection, cyber security, and logistics analysis, expanding the application of GPU acceleration in the realm of big data analytics.
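As an illustrative sketch of the Unified Memory feature (assuming a CUDA 8 or later toolkit; all names here are for the example only), a single allocation from `cudaMallocManaged` can be read and written from both CPU and GPU without explicit copies:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: scale each element in place.
__global__ void scale(float *x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1024;
    float *x;
    // Unified Memory: one pointer valid on both host and device
    cudaMallocManaged(&x, n * sizeof(float));

    for (int i = 0; i < n; ++i) x[i] = 1.0f;   // initialise on the CPU

    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);  // operate on the GPU
    cudaDeviceSynchronize();                      // wait before the CPU reads

    printf("x[0] = %f\n", x[0]);                  // read back on the CPU
    cudaFree(x);
    return 0;
}
```

The CUDA runtime migrates pages between host and device on demand, which removes the explicit `cudaMemcpy` calls that earlier CUDA code required.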
