NVIDIA® Tesla® V100 is the most advanced data center GPU ever built to accelerate AI, HPC, and graphics. It is powered by the NVIDIA Volta architecture, comes in 16 GB and 32 GB configurations, and offers the performance of up to 100 CPUs in a single GPU. Data scientists, researchers, and engineers can now spend less time optimizing memory usage and more time designing the next AI breakthrough.


The NVIDIA Tesla V100 is available in the following form factors:

  1. The SXM2 module, which requires servers such as the XENON NITRO GK17, NVIDIA DGX-1V, and NVIDIA DGX Station.
  2. The PCIe card: a “classic” GPU card for servers with a PCIe interface.

Please note: the PCIe version does not support NVLink.


Tesla® V100


Tesla V100 for NVLink

  • 7.8 teraFLOPS - Double Precision
  • 15.7 teraFLOPS - Single Precision
  • 125 teraFLOPS - Deep Learning
  • 300 GB/s - NVLink Interconnect Bandwidth
  • 32/16 GB HBM2 - Capacity
  • 900 GB/s - Memory Bandwidth
  • 300 W - Max. Power Consumption

Tesla V100 for PCIe

  • 7 teraFLOPS - Double Precision
  • 14 teraFLOPS - Single Precision
  • 112 teraFLOPS - Deep Learning
  • 32 GB/s - PCIe Interconnect Bandwidth
  • 32/16 GB HBM2 - Capacity
  • 900 GB/s - Memory Bandwidth
  • 250 W - Max. Power Consumption
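
The peak throughput figures above follow directly from the V100's published core counts and boost clocks (5120 CUDA cores, 2560 FP64 units, 640 Tensor Cores; roughly 1530 MHz boost for the SXM2 part and 1370 MHz for the PCIe part). A minimal sketch of that arithmetic, assuming those published figures:

```python
# Rough sanity check of the peak-throughput figures above, assuming
# NVIDIA's published Volta core counts and boost clocks.

def peak_tflops(units: int, flops_per_cycle: int, clock_ghz: float) -> float:
    """Theoretical peak = units x FLOPs/unit/cycle x clock, in teraFLOPS."""
    return units * flops_per_cycle * clock_ghz / 1000.0

CUDA_CORES = 5120    # FP32 units, 2 FLOPs/cycle via FMA
FP64_UNITS = 2560    # half the FP32 count on Volta
TENSOR_CORES = 640   # each performs 64 FMAs = 128 FLOPs per cycle

for name, clock_ghz in [("SXM2 (NVLink)", 1.530), ("PCIe", 1.370)]:
    print(f"{name}:")
    print(f"  FP64:   {peak_tflops(FP64_UNITS, 2, clock_ghz):6.1f} TFLOPS")
    print(f"  FP32:   {peak_tflops(CUDA_CORES, 2, clock_ghz):6.1f} TFLOPS")
    print(f"  Tensor: {peak_tflops(TENSOR_CORES, 128, clock_ghz):6.1f} TFLOPS")
```

The "Deep Learning" figure counts Tensor Core mixed-precision FMAs; the 900 GB/s memory bandwidth comes from the 4096-bit HBM2 interface, not the compute clocks.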

Need a Quote?

Have questions about XENON’s products and solutions? Just ask. A knowledgeable Sales Specialist will get back to you shortly.
