NVIDIA Tesla Pascal Boost

Boost Up to Tesla Servers

Modern data centers are key to solving some of the world’s most important scientific and big data challenges using high performance computing (HPC) and artificial intelligence (AI). The NVIDIA® Tesla® accelerated computing platform gives these modern data centers the power to accelerate HPC and AI workloads. NVIDIA Pascal GPU-accelerated servers deliver breakthrough performance with fewer servers, resulting in faster scientific discoveries and insights at dramatically lower cost. With over 400 GPU-optimized HPC applications across a broad range of domains, including the top 10 HPC applications and all major deep learning frameworks, every modern data center can save money with the Tesla platform.

No matter the workload, Microway has a solution to fit your needs.

The above configurations fit the requirements of 90% of our customers, but you may also speak with one of our experts if you have questions or unique requirements.

NVIDIA Tesla P100

NVIDIA Tesla P100 GPUs are enabling a new level of performance for HPC and technical computing workloads. Microway is proud to offer a variety of performance-tuned GPU systems with a host of new capabilities.

  • Improved compute performance per GPU
    Up to 5.3 TFLOPS double- and 10.6 TFLOPS single-precision floating-point performance
  • Faster GPU memory
    High-bandwidth HBM2 memory provides a 3X improvement over older GPUs
  • Faster connectivity
    NVLink provides 5X faster transfers than PCI-Express
  • Pascal Unified Memory
    Allows GPU applications to directly access the memory of all GPUs as well as all system memory
  • Direct CPU-to-GPU NVLink connectivity
    OpenPOWER systems support NVLink transfers between the CPUs and GPUs
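To put the interconnect figures in perspective, here is a quick back-of-the-envelope sketch (using the achievable bandwidth numbers quoted in the specification tables, not measured results) of how long a 16 GB transfer takes over each link:

```python
# Rough transfer-time estimate for a 16 GB working set, using the
# achievable bandwidth figures quoted for Pascal:
#   PCI-Express 3.0 x16: ~12 GB/s achievable
#   NVLink (P100 SXM2):  ~66 GB/s achievable
size_gb = 16

pcie_seconds = size_gb / 12    # ~1.33 s over PCI-Express 3.0
nvlink_seconds = size_gb / 66  # ~0.24 s over NVLink

speedup = pcie_seconds / nvlink_seconds  # ~5.5x, in line with the "5X faster" claim
print(f"PCI-E: {pcie_seconds:.2f} s, NVLink: {nvlink_seconds:.2f} s, speedup: {speedup:.1f}x")
```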

The Tesla P100 GPUs are built upon NVIDIA’s latest “Pascal” architecture. Unlike previous generations, these GPUs are available in two flavors:

  • SXM2 with NVLink – a new form factor specifically designed for speed and performance
  • Standard PCI-Express – a cost-effective and backwards compatible GPU

NVIDIA Tesla P100 Specifications

| Feature | Tesla P100 SXM2 16GB | Tesla P100 PCI-E 16GB | Tesla P100 PCI-E 12GB |
|---|---|---|---|
| GPU Chip(s) | Pascal GP100 | Pascal GP100 | Pascal GP100 |
| Integer Operations (INT8)* | — | — | — |
| Half Precision (FP16)* | 21.2 TFLOPS | 18.7 TFLOPS | 18.7 TFLOPS |
| Single Precision (FP32)* | 10.6 TFLOPS | 9.3 TFLOPS | 9.3 TFLOPS |
| Double Precision (FP64)* | 5.3 TFLOPS | 4.7 TFLOPS | 4.7 TFLOPS |
| On-die HBM2 Memory | 16GB | 16GB | 12GB |
| Memory Bandwidth | 732 GB/s | 732 GB/s | 549 GB/s |
| L2 Cache | 4 MB | 4 MB | 4 MB |
| Interconnect | NVLink + PCI-E 3.0 | PCI-Express 3.0 | PCI-Express 3.0 |
| Theoretical transfer bandwidth | 80 GB/s | 16 GB/s | 16 GB/s |
| Achievable transfer bandwidth | ~66 GB/s | ~12 GB/s | ~12 GB/s |
| # of SM Units | 56 | 56 | 56 |
| # of single-precision CUDA Cores | 3584 | 3584 | 3584 |
| # of double-precision CUDA Cores | 1792 | 1792 | 1792 |
| GPU Base Clock | 1328 MHz | 1126 MHz | 1126 MHz |
| GPU Boost Support | Yes – Dynamic | Yes – Dynamic | Yes – Dynamic |
| GPU Boost Clock | 1480 MHz | 1303 MHz | 1303 MHz |
| Compute Capability | 6.0 | 6.0 | 6.0 |
| Workstation Support | — | — | — |
| Server Support | Yes | Yes | Yes |
| Wattage (TDP) | 300W | 250W | 250W |

* Measured with GPU Boost enabled
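The headline TFLOPS figures follow directly from the core counts and clocks in the table: each CUDA core can retire one fused multiply-add (two floating-point operations) per clock. A quick sketch of that cross-check:

```python
# Cross-check peak throughput from the P100 table:
# peak FLOPS = CUDA cores x 2 ops (fused multiply-add) x boost clock
sxm2_fp32 = 3584 * 2 * 1.480e9 / 1e12   # ~10.6 TFLOPS (SXM2, FP32)
sxm2_fp64 = 1792 * 2 * 1.480e9 / 1e12   # ~5.3 TFLOPS (SXM2, FP64)
pcie_fp32 = 3584 * 2 * 1.303e9 / 1e12   # ~9.3 TFLOPS (PCI-E, FP32)
print(f"{sxm2_fp32:.1f} / {sxm2_fp64:.1f} / {pcie_fp32:.1f} TFLOPS")
```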

NVIDIA Tesla P40

NVIDIA Tesla P40 GPUs replace the previous-generation Maxwell M40 as the deep learning accelerator of choice. With 12 TFLOPS of single-precision throughput, 47 TOPS of INT8 capability, and hardware video transcoding, the P40 elevates accuracy and responsiveness for any deep learning workload. Deliver a real-time, interactive user experience with the NVIDIA Tesla P40 platform, available in many of our GPU systems.

NVIDIA Tesla P40 Specifications

| Feature | Tesla P40 PCI-E 24GB |
|---|---|
| GPU Chip(s) | Pascal GP102 |
| Integer Operations (INT8)* | 47 TOPS |
| Half Precision (FP16)* | — |
| Single Precision (FP32)* | 12 TFLOPS |
| Double Precision (FP64)* | — |
| Onboard GDDR5 Memory | 24GB |
| Memory Bandwidth | 346 GB/s |
| L2 Cache | 3 MB |
| Interconnect | PCI-Express 3.0 |
| Theoretical transfer bandwidth | 16 GB/s |
| Achievable transfer bandwidth | ~12 GB/s |
| # of SM Units | 30 |
| # of single-precision CUDA Cores | 3840 |
| GPU Base Clock | 1303 MHz |
| GPU Boost Support | Yes – Dynamic |
| GPU Boost Clock | 1531 MHz |
| Compute Capability | 6.1 |
| Workstation Support | — |
| Server Support | Yes |
| Wattage (TDP) | 250W |

* Measured with GPU Boost enabled
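The 47 TOPS INT8 figure lines up with the FP32 rate: on GP102 (compute capability 6.1), each CUDA core can issue a DP4A instruction per clock (a 4-way INT8 dot product with accumulate), giving four times the FP32 operation rate. A sketch of that cross-check:

```python
# P40: peak FP32 = 3840 cores x 2 ops (FMA) x 1.531 GHz boost clock
fp32_tflops = 3840 * 2 * 1.531e9 / 1e12   # ~11.8 TFLOPS (the "12 TFLOPS" headline)
int8_tops = fp32_tflops * 4               # DP4A: 4x the FP32 op rate -> ~47 TOPS
print(f"FP32: {fp32_tflops:.1f} TFLOPS, INT8: {int8_tops:.0f} TOPS")
```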


NVIDIA Tesla P4

NVIDIA has also released the Tesla P4 inferencing accelerator. These accelerators are purpose-built for scale-out servers running deep learning workloads. The Tesla P4 reduces inference latency by up to 15X while operating within a power-efficient 50-75 watt total power draw. Complete with all of the benefits of the Pascal architecture, the Tesla P4 delivers exceptional performance and power efficiency.

NVIDIA Tesla P4 Specifications

| Feature | Tesla P4 PCI-E 8GB |
|---|---|
| GPU Chip(s) | Pascal GP104 |
| Integer Operations (INT8)* | 22 TOPS |
| Single Precision (FP32)* | 5.5 TFLOPS |
| Onboard GDDR5 Memory | 8GB |
| Memory Bandwidth | 192 GB/s |
| L2 Cache | 2 MB |
| Interconnect | PCI-Express 3.0 |
| Theoretical transfer bandwidth | 16 GB/s |
| Achievable transfer bandwidth | ~12 GB/s |
| # of SM Units | 20 |
| # of single-precision CUDA Cores | 2560 |
| GPU Base Clock | 810 MHz |
| GPU Boost Support | Yes – Dynamic |
| GPU Boost Clock | 1063 MHz |
| Compute Capability | 6.1 |
| Server Support | Yes |
| Wattage (TDP) | 50W / 75W |

* Measured with GPU Boost enabled
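The P4’s value for scale-out inference is easiest to see as throughput per watt. A rough comparison against the P40, using only the TDP and INT8 figures from the tables above (a sketch, not a measured result):

```python
# INT8 throughput per watt at TDP, from the spec tables above (rough comparison)
p4_tops_per_watt = 22 / 75     # ~0.29 TOPS/W at the 75 W limit
p40_tops_per_watt = 47 / 250   # ~0.19 TOPS/W
print(f"P4: {p4_tops_per_watt:.2f} TOPS/W, P40: {p40_tops_per_watt:.2f} TOPS/W")
```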

