NVIDIA GPU Compute and Accelerators


Data Centre to Edge Acceleration

NVIDIA® has developed a full range of GPU compute platforms that let you accelerate HPC and AI workloads from the data centre out to the field, or edge. XENON is an NVIDIA Elite Partner for DGX™ Systems and a DGX™-Ready Service Partner – so when you purchase a DGX System from XENON you can rely on us for delivery, support and ongoing services.

XENON also partners with major OEMs to deliver best-of-breed systems and NVIDIA-Certified Workstations and Servers in custom configurations.

Overview of the NVIDIA range:

  • Data Centre GPUs. The full range of NVIDIA Data Centre GPUs is available from XENON. These PCIe-based GPUs span a range of capabilities, making it possible to match the right GPU to specific applications and budgets. See the full range here. 
  • DGX™ Systems. Complete solutions for any artificial intelligence, machine learning, or visualisation workload. DGX Systems are the foundation of high performance AI applications in the data centre. The current range includes the DGX H100 and DGX A100. Check out the DGX Range here. Using high performance NVIDIA networking, five or more DGX Systems can be combined in a cluster to create a DGX SuperPOD. These tested reference architectures allow DGX capabilities to be scaled quickly.
  • EGX™ – Edge Computing. NVIDIA provides a complete solution for processing AI workloads at the edge. Using containers from the NVIDIA NGC™ catalogue, AI models trained in the data centre can be run in the same containers on NVIDIA edge devices – both edge servers and the Jetson range of embedded devices. The Jetsons come in kit form and are a great extension to your AI capabilities, or a great starting point for exploring AI.
  • HGX™ Systems. These combine 4 or 8 GPUs linked with NVLink and delivered in certified servers. Similar to DGX systems in capability, they are currently available with H100, H200 or A100 GPUs.
  • MGX™ – Grace / Grace Hopper. Grace is the Arm-based CPU NVIDIA developed to pair CPU capabilities with its most advanced GPUs. Released in 2023, these systems provide a highly integrated and powerful compute platform in compact server formats. Grace Hopper is a combined CPU-GPU Superchip architecture. The XENON ARGON range of servers are Grace / Grace Hopper systems – review the ARGON range here.
  • OVX™ Systems. Announced in August 2023, these server-based systems from OEM partners include up to 8 L40S GPUs. The L40S is a powerful and versatile GPU that can be used for AI workloads or for Omniverse visualisation applications.

GPU Accelerated Applications

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, and enterprise applications. Pioneered in 2007 by NVIDIA, GPUs now power energy-efficient data centres in government labs, universities, enterprises, and small-and-medium businesses around the world. These GPU computing systems are ideal for data analytics, artificial intelligence, and visualisation workloads. GPUs built for computational workloads are specially designed, with features such as dedicated matrix-multiplication hardware, thousands of parallel cores, and internal circuitry that ensures the host system can take advantage of all the capabilities of the GPU. NVIDIA maintains a catalogue of GPU accelerated applications – download the catalogue.
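To illustrate the kind of massively parallel arithmetic these GPUs are built for, the sketch below is a minimal, unoptimised CUDA matrix-multiplication kernel. It is an assumption-laden teaching example only – in practice, applications typically call tuned libraries such as cuBLAS rather than hand-written kernels – and the function names, matrix size handling and launch configuration here are illustrative.

```cuda
#include <cuda_runtime.h>

// Naive matrix multiply: C = A * B for square N x N matrices.
// Each GPU thread computes one element of C, so N*N results are
// produced in parallel across the GPU's cores.
__global__ void matmul(const float* A, const float* B, float* C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

// Host-side launch helper (illustrative): 16x16 threads per block,
// enough blocks to cover the whole output matrix.
void matmul_on_gpu(const float* dA, const float* dB, float* dC, int N)
{
    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (N + block.y - 1) / block.y);
    matmul<<<grid, block>>>(dA, dB, dC, N);
    cudaDeviceSynchronize();
}
```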

How Do Applications Accelerate with GPUs?

GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user’s perspective, applications simply run significantly faster. XENON designs GPU Computing systems for optimal performance across the whole system – power supply, cooling, memory and internal CPU performance.
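A minimal sketch of that offload pattern in CUDA is shown below: the compute-intensive portion (here a simple vector addition standing in for the heavy part of an application) runs on the GPU, while setup, data movement and the rest of the program stay on the CPU. Array sizes and names are illustrative assumptions, not a specific application.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

// Compute-intensive portion, offloaded to the GPU:
// each thread handles one element of the output.
__global__ void vector_add(const float* a, const float* b, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;                      // 1M elements (illustrative size)
    std::vector<float> a(n, 1.0f), b(n, 2.0f), out(n);

    // CPU side: allocate device buffers and copy the inputs across.
    float *da, *db, *dout;
    cudaMalloc(&da, n * sizeof(float));
    cudaMalloc(&db, n * sizeof(float));
    cudaMalloc(&dout, n * sizeof(float));
    cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // GPU side: launch enough thread blocks to cover all n elements.
    int block = 256;
    int grid  = (n + block - 1) / block;
    vector_add<<<grid, block>>>(da, db, dout, n);

    // Back on the CPU: retrieve the result and continue with the rest of the code.
    cudaMemcpy(out.data(), dout, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("out[0] = %.1f\n", out[0]);          // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dout);
    return 0;
}
```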

Browse this section to review the NVIDIA range of GPUs.

XENON Servers and Workstations

GPU accelerators are now a core part of the compute requirements for most applications and workloads. The question is no longer whether you need a GPU, but how best to deploy GPUs, and which GPU models are best suited to your application, budget and timeframe.

XENON solution architects have decades of experience and can recommend the right system configuration for your workload.

Contact XENON today to start building the right system for your needs!

Talk to a Solutions Architect