GPU Clusters

GPU computing is the process of offloading parallel computational tasks from the CPU to the Graphics Processing Unit (GPU). GPU computing was pioneered by NVIDIA in 2007, and XENON built the first GPU supercomputer cluster in Australia in 2008. Modern GPU architecture includes a mix of processors optimised for specific workloads – from traditional graphics processing to matrix multiplication tasks in artificial intelligence workloads.

The extremely high processing power of GPUs must be matched by equally fast, low-latency fabric, switches, storage and appropriate CPUs in order to maximise the potential of the GPU cluster. Multiple GPUs in a single server create additional challenges in heat generation and power draw in the data centre. The synergy between these components, and their positive and negative effects, must be considered when designing a GPU cluster.

Modern GPU clusters are a subset of High Performance Computing (HPC) clusters. These HPC clusters are tuned to specific workloads, with specific types and quantities of GPUs and CPUs, appropriate interconnect fabric, and storage of suitable capacity and speed.

XENON continues to be the Australian innovator in GPU clusters, from delivering the first in 2008 to the more recent work delivering a new GPU supercomputer for Pawsey in WA in 2019. From industrial design systems and smaller AI applications, through to large-scale HPC-GPU clusters with hundreds or thousands of nodes – XENON delivers a complete and fully tested turn-key solution tailored to the customer’s specific requirements.

Contact XENON today to learn more or start scoping your GPU cluster solution.