Artificial Intelligence Computing in Melbourne
Deep learning, a key driver of modern artificial intelligence (AI), is the fastest-growing field in machine learning. It uses many-layered deep neural networks (DNNs) to learn levels of representation and abstraction that make sense of data such as images, sound, and text. Deep learning is widely used in the research community to help solve big data problems in areas such as computer vision, speech recognition, and natural language processing.
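The "many-layered" idea can be sketched in a few lines of NumPy: each layer multiplies its input by a weight matrix and applies a nonlinearity, producing a progressively more abstract representation. The layer sizes, random weights, and ReLU activation below are illustrative assumptions, not taken from any product described here.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity: negative values become zero.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through each (weights, bias) layer in turn."""
    for W, b in layers:
        x = relu(x @ W + b)   # each layer yields a new representation
    return x

# Three layers of weights: 8 input features -> 16 -> 16 -> 4 outputs.
sizes = [8, 16, 16, 4]
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

batch = rng.standard_normal((32, 8))   # a batch of 32 examples
features = forward(batch, layers)
print(features.shape)                  # (32, 4)
```

Training such a network means adjusting the weights from data; it is exactly this stack of large matrix multiplications that maps so well onto GPU hardware.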
Practical examples include:
- Vehicle, pedestrian and landmark identification for driver assistance
- Image recognition
- Speech recognition and translation
- Natural language processing
- Life sciences
XENON offers a variety of deep learning solutions built around GPU computing, whose massively parallel architecture is well suited to these computation-intensive workloads.
Download XENON’s Capability Statement brochure to learn more.
These solutions include:
NVIDIA® DGX range
XENON is an Australian distributor of the NVIDIA DGX-2™, NVIDIA DGX-1™, and NVIDIA DGX Station™ supercomputers, which give data scientists and artificial intelligence (AI) researchers the accuracy, simplicity, and speed they need for deep learning success. Faster training and iteration ultimately mean faster innovation and time-to-market.
NVIDIA DGX-2™
The first 2-petaFLOPS system, combining 16 fully interconnected GPUs for 10x the deep learning performance.
- NVIDIA® Tesla V100 32GB, SXM3
- 16 total GPUs across both baseboards, 512 GB total HBM2 memory
- 12 total NVSwitches
- 8x EDR InfiniBand/100 GbE Ethernet
NVIDIA DGX-1™
The world’s first purpose-built system optimized for deep learning, with fully integrated hardware and software that can be deployed quickly and easily. Its revolutionary performance significantly accelerates training time, making it the world’s first deep learning supercomputer in a box.
- Built with groundbreaking Pascal™-powered NVIDIA® Tesla® P100 GPU accelerators, interconnected with NVIDIA® NVLink™
- Software stack includes major deep learning frameworks, the NVIDIA® Deep Learning SDK, the DIGITS™ GPU training system, drivers, and CUDA®, for designing the most accurate deep neural networks (DNNs)
- Applications run up to 12x faster than previous GPU-accelerated solutions
NVIDIA DGX Station™
The Personal Supercomputer for Leading-Edge AI Development.
- 3x the performance for deep learning training
- 100x in speed-up on large data set analysis, compared with a 20 node Spark server cluster
- 5x increase in I/O performance over PCIe-connected GPUs with NVIDIA NVLink technology
XENON, together with NVIDIA®, has also developed its own deep learning solution, the DEVCUBE, which contains four high-performance GPUs, each delivering 7 TFLOPS of single-precision performance and 336.5 GB/s of memory bandwidth, with 12 GB of memory per board. All this results in:
- faster turnaround times
- the freedom to explore multiple network architectures
- accelerated dataset manipulation
- an all in one powerful, energy-efficient, cool, and quiet solution
XENON’s first deep learning supercomputer in a box, containing four high-performance GPUs.
- NVIDIA® DIGITS software providing powerful design, training, and visualisation
- Pre-installed standard Ubuntu 14.04 w/ Caffe, Torch, Theano, BIDMach, cuDNN v2, and CUDA 7.0
- A single deskside machine that plugs into a standard wall socket
- DIGITS software package is now available in version 3
XENON’s new-generation GPU-optimised server provides the highest levels of parallel performance for machine/deep learning workloads, with support for 10 GPUs in a single 4U chassis. Compared to a symmetric dual-processor design, the new systems deliver 21% higher throughput and 60% lower latency*, plus an innovative single-root-complex architecture. In addition, the server is thermally optimised for either active or passively cooled GPUs without preheating, and is equipped with redundant 2000W Titanium Level (96%+ efficiency) digital power supplies for better reliability and lower TCO.
Support for 10 GPUs in a single 4U chassis
- Dual Intel® Xeon® processor E5-2600 v4 family
- Up to 1.5TB DDR4 ECC Registered DIMM
- Support for 10x Double Width Passive GPUs
- Support for GTX and Titan X Active GPUs
- Innovative single root complex architecture
AIRI By Pure Storage
AIRI™ is the industry’s first complete AI-ready infrastructure, architected by Pure Storage® and NVIDIA® to extend the power of NVIDIA DGX™ systems and enable AI at scale for every enterprise. AIRI offers enterprises a simple, fast, and future-proof infrastructure to meet their AI demands at any scale, and is available in Australia and NZ from XENON.
AIRI and AIRI Mini
AIRI is a revolutionary end-to-end AI-at-scale infrastructure to address real-world challenges.
- 4x NVIDIA® DGX-1 servers
- NVIDIA® GPU Cloud Deep Learning Stack
- Enterprise-grade support
The smallest AI-ready data centre you can deploy, yet likely the most powerful.
- 2x NVIDIA® DGX-1 servers and Pure FlashBlade™ storage
- Performance of 25 racks of legacy infrastructure
- Offers effortless elastic scale
IBM® Power System™ Accelerated Compute Servers
IBM® Power System™ Accelerated Compute Servers deliver unprecedented performance for modern HPC, analytics, and artificial intelligence (AI). Enterprises can now deploy data-intensive workloads, like deep learning frameworks and accelerated databases, with confidence.
A leadership HPC and AI server with two POWER9 CPUs with enhanced NVLink and 4-6 NVIDIA Tesla V100 GPUs in 2U.
- Faster I/O: up to 5.6x more I/O bandwidth than x86 servers
- The best GPUs: 2-6 NVIDIA® Tesla® V100 GPUs with NVLink
- Extraordinary CPUs: 2x POWER9 CPUs, designed for AI
- Simplest AI architecture: Share RAM across CPUs and GPUs
NVIDIA® TITAN graphic cards are groundbreaking. They give you the power to accomplish things you never thought possible.
NVIDIA® Titan V
NVIDIA® TITAN V is the most powerful graphics card ever created for the PC, driven by the world’s most advanced architecture.
- NVIDIA® Volta Architecture
- 12 GB HBM2 Frame Buffer
- 1455 MHz Boost Clock
- 640 Tensor Cores
NVIDIA® TITAN Xp
NVIDIA® TITAN Xp harnesses incredible computing horsepower and the groundbreaking NVIDIA Pascal™ architecture.
- Pascal GPU Architecture
- 12 GB G5X Frame Buffer
- 11.4 Gbps Memory Speed
- 1582 MHz Boost Clock
NVIDIA® Jetson
NVIDIA® Jetson is the world’s leading visual computing platform for GPU-accelerated parallel processing in the mobile embedded systems market. Its high-performance, low-energy computing for deep learning and computer vision makes Jetson the ideal solution for compute-intensive embedded projects.
It comes pre-flashed with a Linux environment, includes support for many common APIs, and is supported by NVIDIA’s complete development tool chain. The board also exposes a variety of standard hardware interfaces, enabling a highly flexible and extensible platform. This makes it ideal for all your applications requiring high computational performance in a low-power envelope.
Software updates and the developer SDK are available from NVIDIA. The SDK includes an OS image that you load onto your device, developer tools, supporting documentation, and code samples to help you get started.