Description

The NVIDIA® DGX Station brings the incredible performance of an AI supercomputer to a workstation form factor, taking advantage of innovative engineering and a water-cooled system that runs whisper-quiet. The NVIDIA® DGX Station packs 480 TFLOPS of performance, and is the first and only workstation built on four NVIDIA® Tesla® V100 accelerators, with innovations such as next-generation NVLink™ and the new Tensor Core architecture.

This ground-breaking solution offers:

  • 3x the performance for deep learning training, compared with today’s fastest GPU workstations
  • 100x speed-up on large data set analysis, compared with a 20-node Spark server cluster
  • 5x increase in I/O performance over PCIe-connected GPUs, via NVIDIA NVLink technology
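For context, the headline 480 TFLOPS figure above and the Tensor Core, CUDA core and GPU memory totals in the specification table below are simple aggregates across the four accelerators. The per-GPU figures used here are the published Tesla V100 (16 GB) specifications and do not appear on this page:

$$
\begin{aligned}
\text{FP16 Tensor performance} &\approx 4 \times 120\ \text{TFLOPS} = 480\ \text{TFLOPS} \\
\text{Tensor Cores} &= 4 \times 640 = 2{,}560 \\
\text{CUDA Cores} &= 4 \times 5{,}120 = 20{,}480 \\
\text{GPU memory} &= 4 \times 16\ \text{GB} = 64\ \text{GB}
\end{aligned}
$$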

Specifications

NVIDIA® DGX Station

GPUs: 4X Tesla V100
TFLOPS (GPU FP16): 480
GPU Memory: 64 GB total system
NVIDIA® Tensor Cores: 2,560
NVIDIA® CUDA® Cores: 20,480
CPU: Intel® Xeon® E5-2698 v4 2.2 GHz (20-core)
System Memory: 256 GB LRDIMM DDR4
Storage:
  • Data: 3X 1.92 TB SSD RAID 0
  • OS: 1X 1.92 TB SSD
Network: Dual 10 Gb LAN
Display: 3X DisplayPort, 4K resolution
Acoustics: < 35 dB
System Weight: 88 lbs / 40 kg
System Dimensions:
  • Depth: 518 mm
  • Width: 518 mm
  • Height: 639 mm
Maximum Power Requirements: 1,500 W
Operating Temperature Range: 10–30 °C
Software:
  • Ubuntu Desktop Linux OS
  • DGX Recommended GPU Driver
  • CUDA Toolkit
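
As a quick sanity check of that software stack, the CUDA runtime API can be used to confirm that all four Tesla V100 GPUs are visible to applications. This is a minimal sketch, not part of the DGX software image; it assumes only the GPU driver and CUDA Toolkit listed above are installed:

```cuda
// device_query.cu — list the GPUs visible to the CUDA runtime on a DGX Station.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int deviceCount = 0;
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA devices found: %d\n", deviceCount);  // expect 4 on a DGX Station

    for (int i = 0; i < deviceCount; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Each Tesla V100 reports compute capability 7.0 and roughly 16 GB of HBM2.
        std::printf("GPU %d: %s, compute %d.%d, %.1f GB memory\n",
                    i, prop.name, prop.major, prop.minor,
                    prop.totalGlobalMem / 1073741824.0);
    }
    return 0;
}
```

Built with nvcc device_query.cu -o device_query, this should report four devices, each identified as a Tesla V100 with about 16 GB of memory.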

Need a Quote?

Have questions about XENON’s products and solutions? Just ask. A knowledgeable Sales Specialist will get back to you shortly.
