Experience Maximum Inference Throughput
In the new era of AI and intelligent machines, deep learning is shaping our world like no other computing model in history. GPUs powered by the revolutionary NVIDIA® Pascal™ architecture provide the computational engine for this new era of artificial intelligence, enabling amazing user experiences by accelerating deep learning applications at scale.

The NVIDIA® Tesla® P40 is purpose-built to deliver maximum throughput for deep learning deployment. With 47 TOPS (Tera-Operations Per Second) of INT8 inference performance per GPU, a single server with eight Tesla P40s delivers the performance of over 140 CPU servers.

As models increase in accuracy and complexity, CPUs are no longer capable of delivering an interactive user experience. The NVIDIA® Tesla® P40 delivers over 30X lower latency than a CPU for real-time responsiveness, even with the most complex models.



Model Number: NVIDIA® Tesla® P40
GPU Architecture: NVIDIA® Pascal™
Single-Precision Performance: 12 TeraFLOPS*
Integer Operations (INT8): 47 TOPS* (Tera-Operations per Second)
GPU Memory: 24 GB
Memory Bandwidth: 346 GB/s
System Interface: PCI Express 3.0 x16
Form Factor: 4.4” H x 10.5” L, Dual Slot, Full Height
Max Power:
Enhanced Programmability with Page Migration Engine
ECC Protection
Server-Optimized for Data Center Deployment
Hardware-Accelerated Video Engine: 1x Decode Engine, 2x Encode Engine
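The headline throughput figures above can be sanity-checked against the P40's shader configuration. The sketch below assumes the 3840 CUDA cores, ~1.53 GHz boost clock, and 384-bit / 7.2 Gbps GDDR5 memory bus from NVIDIA's public P40 specifications; those values are not stated in the text above.

```python
# Sanity check of the Tesla P40's headline numbers.
# Assumed (from NVIDIA's published P40 specifications, not this page):
cuda_cores = 3840
boost_clock_hz = 1.531e9  # ~1531 MHz boost clock

# FP32: each CUDA core retires one fused multiply-add (2 ops) per clock.
fp32_tflops = cuda_cores * boost_clock_hz * 2 / 1e12

# INT8: Pascal's dp4a instruction performs a 4-element dot product with
# accumulate per core per clock: 4 multiplies + 4 adds = 8 ops.
int8_tops = cuda_cores * boost_clock_hz * 8 / 1e12

# Memory bandwidth: 384-bit GDDR5 bus at an assumed 7.2 Gbps per pin.
mem_bw_gbs = (384 / 8) * 7.2

print(f"FP32: {fp32_tflops:.1f} TFLOPS")    # ~11.8, marketed as 12
print(f"INT8: {int8_tops:.1f} TOPS")        # ~47.0
print(f"Bandwidth: {mem_bw_gbs:.0f} GB/s")  # ~346
```

The 4X ratio between INT8 TOPS and FP32 TFLOPS is exactly what the dp4a dot-product instruction provides, which is why the P40 is positioned for INT8 inference rather than training.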

Need a Quote?

Have questions about XENON’s products and solutions? Just ask. A knowledgeable Sales Specialist will get back to you shortly.
