Benefits

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, and enterprise applications. Pioneered in 2007 by NVIDIA®, GPUs now power energy-efficient datacentres in government labs, universities, enterprises, and small-and-medium businesses around the world.

How Do Applications Accelerate with GPUs?
GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run significantly faster.
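The offload model described above can be sketched in CUDA. The SAXPY kernel below is a generic illustration, not code for any product on this page: the compute-intensive, data-parallel loop runs on the GPU, while setup and results handling stay on the CPU.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// GPU kernel: each thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);  // host-side setup stays on the CPU

    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // The compute-intensive portion is offloaded to the GPU:
    // 4096 blocks of 256 threads cover all n elements in parallel.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    // The remainder of the code runs on the CPU once results are copied back.
    cudaMemcpy(hy.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

The same pattern, a serial CPU driver around massively parallel kernels, underlies the application speed-ups quoted throughout this page.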

NVIDIA® Tesla® GPU accelerators

Accelerate your most demanding HPC, hyperscale, and enterprise data center workloads with NVIDIA® Tesla® GPU accelerators. Scientists can now crunch through petabytes of data up to 10x faster than with CPUs in applications ranging from energy exploration to deep learning.

model no. | description | key features

The world’s most advanced data center GPU ever built to accelerate AI, HPC, and graphics.

  • Six Graphics Processing Clusters (GPCs)
  • 84 Volta streaming multiprocessor units
  • 5376 CUDA cores on the complete die
  • Higher clocks and higher power efficiency

The most advanced data center accelerator ever built, featuring a brand-new GPU architecture that delivers the world’s fastest compute node.

  • NVIDIA® Pascal™ architecture
  • 18.7 TFLOPS of FP16 performance
  • 9.3 TFLOPS of FP32 performance
  • 4.7 TFLOPS of FP64 performance
  • 16GB of high performance HBM2 memory

Powered by NVIDIA® Pascal™ architecture and purpose-built to boost efficiency for scale-out servers running deep learning workloads.

  • Small form-factor, 50/75-Watt design fits any scale-out server.
  • INT8 operations slash latency by 15X.
  • Hardware-decode engine capable of transcoding and inferencing 35 HD video streams in real time.

With 47 TOPS (Tera-Operations Per Second) of INT8 inference performance per GPU, a single server with 8 Tesla P40s delivers the performance of over 140 CPU servers.

  • The world’s fastest processor for inference workloads
  • 47 TOPS of INT8 for maximum inference throughput and responsiveness
  • Hardware-decode engine capable of transcoding and inferencing 35 HD video streams in real time

Designed specifically for data centers that are virtualising desktop graphics, it features a dual-slot PCI Express form factor for rack and tower servers and supports up to 32 concurrent users.

  • 4,096 NVIDIA® CUDA® cores
  • 16GB of GDDR5 memory
  • 7.4 TFLOPS single-precision performance
  • GRID 2.0 vGPU software support

XENON GPU Servers

XENON rackmount GPU servers and systems are fully optimised for the latest GPU computing modules and deliver 10X higher application performance than the latest multi-core CPU systems, providing unprecedented real-world scalability and breakthrough power-efficiency across a wide array of HPC applications.

model no. | description | key features

The NVIDIA® DGX-1™ is the world’s first purpose-built system for artificial intelligence (AI) and deep learning with fully integrated hardware and software that can be deployed quickly and easily.

  • 170 TFLOPS of FP16 peak performance
  • 8x NVIDIA® Tesla® P100 GPU accelerators
  • 16GB of HBM2 memory per GPU
  • NVIDIA NVLink™ Interconnect

The new NVIDIA® DGX-1 is similar to the previous generation offering based on Pascal, but is powered by eight Tesla V100 GPUs, linked together via next-generation NVIDIA® NVLink interconnect technology.

  • Dual, 20-core Intel® Xeon® E5-2698 CPUs
  • 512GB of RAM
  • Four 1.92TB SSDs in RAID 0
  • A pair of 10GbE connections

The Personal Supercomputer for Leading-Edge AI Development.

  • 3x the performance for deep learning training
  • 100x in speed-up on large data set analysis, compared with a 20 node Spark server cluster
  • 5x increase in I/O performance over PCIe-connected GPUs with NVIDIA NVLink technology

Powering 10 GPUs into a 4U chassis.

  • Dual Intel® Xeon® processor E5-2600 v4 family
  • Up to 1.5TB DDR4 ECC Registered DIMM
  • Support for 10x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs
  • Innovative single root complex architecture

The NITRO™ GX48 Rack Server supports up to 8x NVIDIA® Tesla accelerators.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 8x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs
  • 24x 2.5” hot-swap Drive bays

The NITRO™ G7 Workstation harnesses the increased in-memory capabilities of 64-bit processing to reduce the time required for highly complex computations.

  • Single Intel® Xeon® processor E5-2600 v4 family
  • Up to 512GB DDR4 ECC Registered DIMM
  • Support for 2x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs

The NITRO™ GX17 Rack Server supports up to 4x NVIDIA® Tesla accelerators in a compact 1U form factor.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 4x Double Width Passive GPUs
  • 2x 2.5” hot-swap Drive bays

The NITRO GK17 4GPU rack server with four Tesla P100 GPUs offers a total of 64GB of high-bandwidth GPU memory.

  • Dual socket R3 (LGA 2011) supports Intel® Xeon® processor E5-2600 v4/ v3 family
  • Up to 512GB ECC 3DS LRDIMM, up to 1TB ECC RDIMM
  • Up to 4 Tesla P100 SXM2
  • 2x 2.5″ Hot-swap drive bays, 2x 2.5″ internal drive bays

The NITRO™ G27 Rack Server supports up to 6x NVIDIA® Tesla accelerators.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 6x Double Width Passive GPUs
  • 10x 2.5” hot-swap Drive bays

XENON GPU Personal SuperComputers

Turn your standard workstations into powerful personal supercomputers and receive cluster level performance right at your desk. Graphics Processing Units (GPUs) are outstanding at delivering performance where massively parallel floating point calculations are required.

XENON’s NITRO™ range of personal supercomputers are equipped with NVIDIA® Tesla® GPUs and the CUDA® architecture, to deliver breakthrough performance for parallel computing applications.

model no. | description | key features

A state-of-the-art deskside personal supercomputer that supports up to 4 GPUs. It delivers unmatched graphics compute per cubic centimetre and provides the highest visual compute density, enabling breakthrough levels of capability and productivity.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 4x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs
  • 8x 3.5” hot-swap Drive bays

Optimised for Animation/Visualization and High Performance Computing environments that require intensive processing power for visualisation, data-modelling, media production and design.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 3x Double Width Active GPUs
  • Support for GTX and Titan X Active GPUs

Optimised for Visual and High Performance Computing environments that require intensive processing power for visualisation, data-modelling, media production and design.

  • Single Intel® Xeon® E5-2600/1600 v4 family
  • Up to 512GB DDR4 ECC Registered DIMM
  • Support for 3x Double Width Active GPUs
  • Support for GTX and Titan X Active GPUs

DEVCUBE combines four high-performance GPUs, each board delivering 7 TFLOPS of single-precision performance, 336.5 GB/s of memory bandwidth, and 12GB of memory.

  • Intel® Core i7-5960X processor
  • 32GB DDR4-2400 Memory
  • Up to 4x GTX 1080 / Titan X GPUs
  • Liquid Cooled for office environments

POWER8 with NVLink + Tesla P100

IBM® Power Systems™ S822LC for High Performance Computing pairs the strengths of the POWER8 CPU with 4 NVIDIA® Tesla® P100 GPUs. These best-in-class processors are tightly bound with NVIDIA NVLink technology from CPU to GPU, advancing the performance, programmability, and accessibility of accelerated computing and resolving the PCI-E bottleneck.

model no. | description | key features

Tackle new problems with NVIDIA® Tesla® P100 on the only architecture with CPU:GPU NVLink

  • Two POWER8® CPUs and 4 Tesla P100 with NVLink GPUs in a versatile 2U Linux server
  • New possibilities with POWER8 with NVLink, the only architecture with NVIDIA NVLink Technology from CPU to GPU
  • Designed for accelerated workloads in HPC, the enterprise datacenter, and accelerated cloud deployments

XENON GPU Clusters

The NVIDIA® Tesla® architecture is a massively parallel platform that utilises high-performance GPU cards and advanced interconnect technologies to accelerate time-to-insight. With XENON you can customise an NVIDIA® GPU cluster solution that fits your precise use case or application needs. XENON’s cluster solutions are powered by NVIDIA® Tesla® P100, K80, M40, M60, M4 and M6 cards and can help your company capitalise on the data explosion by processing large or compute-intensive workloads without increasing the power budget or physical footprint of your data center. Contact XENON today for your customised GPU cluster solution.

NVIDIA® Jetson Embedded Platforms

NVIDIA Jetson is the world’s leading visual computing platform for GPU-accelerated parallel processing in the mobile embedded systems market. Its high-performance, low-energy computing for deep learning and computer vision makes Jetson the ideal solution for compute-intensive embedded projects like:

  • Drones
  • Autonomous Robotic Systems
  • Mobile Medical Imaging

model no. | description | key features

The new NVIDIA® Jetson™ TX2 is a high-performance, low-power supercomputer on a module that provides extremely quick, accurate AI inferencing in everything from robots and drones to enterprise collaboration devices and intelligent cameras.

  • NVIDIA Pascal™ architecture, 256 CUDA cores
  • HMP dual Denver 2 (2MB L2) + quad ARM® A57 (2MB L2)
  • 4K x 2K 60Hz encode (HEVC) and 4K x 2K 60Hz decode (12-bit support)
  • 8GB 128-bit LPDDR4, 59.7 GB/s

NVIDIA® GPU Software

CUDA PARALLEL COMPUTING PLATFORM

CUDA® is a parallel computing platform and programming model invented by NVIDIA®. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are using GPU-accelerated computing for broad-ranging applications.

CUDA 8 gives developers direct access to powerful new Pascal features such as Unified Memory and lightning-fast peer-to-peer communication using NVLink. Also included in this release is a new graph analytics library, nvGRAPH, which can be used for fraud detection, cyber security, and logistics analysis, expanding the application of GPU acceleration into the realm of big data analytics.
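As a minimal sketch of the Unified Memory feature mentioned above (illustrative only, assuming a CUDA 8-capable Pascal GPU): a single `cudaMallocManaged` allocation is visible to both the CPU and the GPU, so the explicit host-to-device copies of earlier CUDA versions disappear.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// GPU kernel: scale every element of the shared array in place.
__global__ void scale(int n, float s, float *data) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= s;
}

int main() {
    const int n = 1024;
    float *data;
    // One managed allocation, accessible from both CPU and GPU;
    // pages migrate between host and device memory on demand.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;      // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(n, 3.0f, data);  // GPU reads/writes the same pointer
    cudaDeviceSynchronize();                         // wait before the CPU touches it again

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

No `cudaMemcpy` calls appear anywhere: the driver handles data movement, which is what makes Unified Memory attractive for porting existing CPU code.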