Benefits

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, and enterprise applications. Pioneered in 2007 by NVIDIA®, GPU-accelerated computing now powers energy-efficient datacentres in government labs, universities, enterprises, and small and medium businesses around the world.

How Do Applications Accelerate with GPUs?
GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run significantly faster.
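
As a minimal sketch of this offload model, the CUDA C++ example below runs the data-parallel work (a simple SAXPY loop, chosen purely for illustration) on the GPU while setup, data transfers, and result checking stay on the CPU; it can be built with nvcc.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Compute-intensive portion: executed on the GPU, one thread per element.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // CPU portion: allocate and initialise the data as usual.
        float* x = (float*)malloc(bytes);
        float* y = (float*)malloc(bytes);
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Offload: copy inputs to the GPU, launch the kernel, copy results back.
        float *d_x, *d_y;
        cudaMalloc(&d_x, bytes);
        cudaMalloc(&d_y, bytes);
        cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

        cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %f\n", y[0]);  // expect 4.0

        cudaFree(d_x); cudaFree(d_y);
        free(x); free(y);
        return 0;
    }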

NVIDIA® Tesla® GPU accelerators

Accelerate your most demanding HPC, hyperscale, and enterprise data center workloads with NVIDIA® Tesla® GPU accelerators. Scientists can now crunch through petabytes of data up to 10x faster than with CPUs in applications ranging from energy exploration to deep learning.


The world’s most advanced data center GPU ever built to accelerate AI, HPC, and graphics.

  • Six GPCs (Graphics Processing Clusters)
  • 84 Volta streaming multiprocessor units
  • 5376 CUDA cores on the complete die
  • Higher clocks and higher power efficiency

The most advanced data center accelerator ever built, a brand new GPU architecture to deliver the world’s fastest compute node.

  • NVIDIA® Pascal™ architecture
  • 18.7 TFLOPS of FP16 performance
  • 9.3 TFLOPS of FP32 performance
  • 4.7 TFLOPS of FP64 performance
  • 16GB of high performance HBM2 memory

Powered by NVIDIA® Pascal™ architecture and purpose-built to boost efficiency for scale-out servers running deep learning workloads.

  • Small form-factor, 50/75-Watt design fits any scale-out server.
  • INT8 operations slash latency by 15X.
  • Hardware-decode engine capable of transcoding and inferencing 35 HD video streams in real time.

With 47 TOPS (Tera-Operations Per Second) of INT8 inference performance per GPU, a single server with eight Tesla P40s delivers the performance of over 140 CPU servers.

  • The world’s fastest processor for inference workloads
  • 47 TOPS of INT8 for maximum inference throughput and responsiveness
  • Hardware-decode engine capable of transcoding and inferencing 35 HD video streams in real time

Delivers up to 10x application performance compared to CPUs and up to a 2.8x speed-up compared to the NVIDIA® Tesla® M2090.

  • 2,880 NVIDIA® CUDA® cores
  • 1.43 TFLOPS double-precision performance
  • 4.29 TFLOPS single-precision performance
  • 12GB of GDDR5 memory

Dramatically lowers datacentre cost by delivering application performance with fewer, more powerful servers.

  • 4992 NVIDIA® CUDA® cores
  • 2.91 TFLOPS double-precision performance
  • 8.73 TFLOPS single-precision performance
  • 24GB of GDDR5 memory

Designed specifically for data centers that are virtualising desktop graphics. Its dual-slot PCI Express form factor suits rack and tower servers and supports up to 32 concurrent users.

  • 4,096 NVIDIA® CUDA® cores
  • 16GB of GDDR5 memory
  • 7.4 TFLOPS single-precision performance
  • GRID 2.0 vGPU software support

Purpose-built for deep learning training, it is the world’s fastest deep learning training accelerator for the data center.

  • NVIDIA® Maxwell™ architecture
  • 3072 NVIDIA® CUDA® cores
  • 7 TFLOPS of single-precision performance
  • 24GB of GDDR5 memory

Low-power, small form factor GPU accelerator optimized for video transcoding, image processing, and machine learning inference that efficiently offloads demanding applications and boosts data center throughput.

  • NVIDIA® Maxwell™ architecture
  • 1024 NVIDIA® CUDA® cores
  • 2.2 TFLOPS of single-precision performance
  • 4 GB of GDDR5 memory
  • Low Profile 75W TDP design

XENON GPU Servers

XENON rackmount GPU servers and systems are fully optimised for the latest GPU computing modules and deliver 10X higher application performance than the latest multi-core CPU systems, providing unprecedented real-world scalability and breakthrough power-efficiency across a wide array of HPC applications.


The NVIDIA® DGX-1™ is the world’s first purpose-built system for artificial intelligence (AI) and deep learning with fully integrated hardware and software that can be deployed quickly and easily.

  • 170 TFLOPS of FP16 peak performance
  • 8x NVIDIA® Tesla® P100 GPU accelerators
  • 16GB of HBM2 memory per GPU
  • NVIDIA NVLink™ Interconnect

The new NVIDIA® DGX-1 is similar to the previous Pascal-based generation but is powered by eight Tesla V100 GPUs, linked together via next-generation NVIDIA® NVLink interconnect technology.

  • Dual, 20-core Intel® Xeon® E5-2698 CPUs
  • 512GB of RAM
  • Four 1.92TB SSDs in RAID 0
  • Dual 10GbE connections

The Personal Supercomputer for Leading-Edge AI Development.

  • 3x the performance for deep learning training
  • 100x speed-up on large data set analysis compared with a 20-node Spark server cluster
  • 5x increase in I/O performance over PCIe-connected GPUs with NVIDIA NVLink technology

Packs 10 GPUs into a 4U chassis.

  • Dual Intel® Xeon® processor E5-2600 v4 family
  • Up to 1.5TB DDR4 ECC Registered DIMM
  • Support for 10x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs
  • Innovative single root complex architecture

The NITRO™ GX48 Rack Server supports up to 8x NVIDIA® Tesla accelerators.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 8x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs
  • 24x 2.5” hot-swap drive bays

The NITRO™ G7 Workstation harnesses the increased in-memory capabilities of 64-bit processing to reduce the time required to complete highly complex computations.

  • Single Intel® Xeon® processor E5-2600 v4 family
  • Up to 512GB DDR4 ECC Registered DIMM
  • Support for 2x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs

The NITRO™ GX17 Rack Server supports up to 4x NVIDIA® Tesla accelerators in a compact 1U form factor.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 4x Double Width Passive GPUs
  • 2x 2.5” hot-swap drive bays

The NITRO™ G27 Rack Server supports up to 6x NVIDIA® Tesla accelerators.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 6x Double Width Passive GPUs
  • 10x 2.5” hot-swap drive bays

XENON GPU Personal SuperComputers

Turn your standard workstations into powerful personal supercomputers and receive cluster level performance right at your desk. Graphics Processing Units (GPUs) are outstanding at delivering performance where massively parallel floating point calculations are required.

XENON’s NITRO™ range of personal supercomputers is equipped with NVIDIA® Tesla® GPUs and the CUDA® architecture to deliver breakthrough performance for parallel computing applications.


A state-of-the-art deskside personal supercomputer that supports up to 4 GPUs. It delivers unmatched graphics compute per cubic centimeter and provides the highest visual compute density, enabling breakthrough levels of capability and productivity.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 4x Double Width Passive GPUs
  • Support for GTX and Titan X Active GPUs
  • 8x 3.5” hot-swap drive bays

Optimised for Animation/Visualisation and High Performance Computing environments that require intensive processing power for visualisation, data-modelling, media production and design.

  • Dual Intel® Xeon® Processor E5-2600 v4 family
  • Up to 1TB DDR4 ECC Registered DIMM
  • Support for 3x Double Width Active GPUs
  • Support for GTX and Titan X Active GPUs

Optimised for Visual and High Performance Computing environments that require intensive processing power for visualisation, data-modelling, media production and design.

  • Single Intel® Xeon® E5-2600/1600 v4 family
  • Up to 512GB DDR4 ECC Registered DIMM
  • Support for 3x Double Width Active GPUs
  • Support for GTX and Titan X Active GPUs

DEVCUBE combines four high-performance GPUs, with 7 TFLOPS of single-precision performance, 336.5 GB/s of memory bandwidth, and 12GB of memory per board.

  • Intel® Core i7-5960X processor
  • 32GB DDR4-2400 Memory
  • Up to 4x GTX 1080 / Titan X GPUs
  • Liquid Cooled for office environments

POWER8 with NVLink + Tesla P100

IBM® Power Systems™ S822LC for High Performance Computing pairs the strengths of the POWER8 CPU with four NVIDIA® Tesla® P100 GPUs. These best-in-class processors are tightly coupled from CPU to GPU with NVIDIA NVLink technology, advancing the performance, programmability, and accessibility of accelerated computing and resolving the PCI-E bottleneck.


Tackle new problems with NVIDIA® Tesla® P100 on the only architecture with CPU-to-GPU NVLink.

  • Two POWER8® CPUs and four Tesla P100 with NVLink GPUs in a versatile 2U Linux server
  • New possibilities with POWER8 with NVLink, the only architecture with NVIDIA NVLink technology from CPU to GPU
  • Designed for accelerated workloads in HPC, the enterprise datacenter, and accelerated cloud deployments

XENON GPU Clusters

The NVIDIA® Tesla® architecture is a massively parallel platform that utilises high-performance GPU cards and advanced interconnect technologies to accelerate time-to-insight. With XENON you can customise an NVIDIA® GPU cluster solution that fits your precise use case or application needs. XENON’s cluster solutions are powered by NVIDIA® Tesla® P100, K80, M40, M60, M4 and M6 cards and can help your company capitalise on the data explosion by processing large or compute-intensive workloads without increasing the power budget or physical footprint of your data center. Contact XENON today for your customised GPU cluster solution.


NVIDIA® Jetson Embedded Platforms

NVIDIA Jetson is the world’s leading visual computing platform for GPU-accelerated parallel processing in the mobile embedded systems market. Its high-performance, low-energy computing for deep learning and computer vision makes Jetson the ideal solution for compute-intensive embedded projects like:

  • Drones
  • Autonomous Robotic Systems
  • Mobile Medical Imaging


The NVIDIA® Jetson TK1 Developer Kit gives you everything you need to unlock the power of the GPU for embedded systems applications.

  • Built with NVIDIA® Tegra® K1 SoC
  • NVIDIA® Kepler™ GPU with 192 CUDA Cores
  • NVIDIA® 4-Plus-1™ Quad-Core ARM® Cortex™-A15 CPU

Full-featured development platform for visual computing designed to get you up and running fast.

  • Pre-flashed with a Linux environment
  • NVIDIA® Maxwell™ GPU
  • 256 NVIDIA® CUDA® Cores
  • Quad-core ARM® Cortex®-A57 CPU

The new NVIDIA® Jetson™ TX2 is a high-performance, low-power supercomputer on a module that provides extremely quick, accurate AI inferencing in everything from robots and drones to enterprise collaboration devices and intelligent cameras.

  • NVIDIA Pascal™ Architecture GPU
  • Dual NVIDIA Denver 2 64-bit CPU + quad-core ARM® Cortex®-A57 complex
  • 8GB 128-bit LPDDR4 memory
  • 32GB eMMC 5.1 Flash Storage
  • Connectivity to 802.11ac Wi-Fi and Bluetooth-Enabled Devices
  • 10/100/1000BASE-T Ethernet

NVIDIA® GPU Software

CUDA PARALLEL COMPUTING PLATFORM

CUDA® is a parallel computing platform and programming model invented by NVIDIA®. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are using GPU-accelerated computing for broad-ranging applications.

CUDA 8 gives developers direct access to powerful new Pascal features such as Unified Memory and lightning-fast peer-to-peer communication using NVLink. Also included in this release is a new graph analytics library, nvGRAPH, which can be used for fraud detection, cyber security, and logistics analysis, expanding the application of GPU acceleration into the realm of big data analytics.
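
As a rough illustration of the Unified Memory feature mentioned above, the CUDA C++ sketch below allocates a single managed buffer that the CPU and GPU access through the same pointer; the scale kernel and buffer size are illustrative only.

    #include <cstdio>
    #include <cuda_runtime.h>

    // The same pointer is valid on host and device; the CUDA runtime
    // migrates the managed memory between them as needed.
    __global__ void scale(int n, float factor, float* data) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float* data = nullptr;
        cudaMallocManaged(&data, n * sizeof(float));     // one allocation, visible to CPU and GPU

        for (int i = 0; i < n; ++i) data[i] = 1.0f;      // initialise on the CPU

        scale<<<(n + 255) / 256, 256>>>(n, 3.0f, data);  // process on the GPU
        cudaDeviceSynchronize();                         // wait before reading on the CPU again

        printf("data[0] = %f\n", data[0]);               // expect 3.0
        cudaFree(data);
        return 0;
    }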
