
NVIDIA Tesla P100 Price Analysis

[Image: Close-up of the NVIDIA Tesla P100 SXM2 GPU]

Now that NVIDIA has launched their new Pascal GPUs, the next question is “What is the Tesla P100 Price?”

Although it’s still a month or two before Tesla P100 shipments begin, the specifications and pricing of Microway’s Tesla P100 GPU-accelerated systems are available. If you’re planning a new project for delivery later this year, we’d be happy to help you get on board. These new GPUs are exceptionally powerful.

Tesla P100 Price

The table below gives a quick breakdown of the Tesla P100 GPU price, performance and cost-effectiveness:

| Tesla GPU model       | Price   | Double-Precision Performance (FP64) | Dollars per TFLOPS |
|-----------------------|---------|-------------------------------------|--------------------|
| Tesla P100 PCI-E 12GB | $5,899* | 4.7 TFLOPS                          | $1,255             |
| Tesla P100 PCI-E 16GB | $7,374* | 4.7 TFLOPS                          | $1,569             |
| Tesla P100 SXM2 16GB  | $9,428* | 5.3 TFLOPS                          | $1,779             |

* single-unit price before any applicable discounts

As one would expect, the price does increase for the higher-end models with more memory and NVLink connectivity. However, the cost-effectiveness of these new P100 GPUs is quite clear: the dollars per TFLOPS of the previous-generation Tesla K40 and K80 GPUs are $2,342 and $1,807 (respectively). That makes any of the Tesla P100 GPUs an excellent choice. Depending upon the comparison, HPC centers should expect the new “Pascal” Tesla GPUs to be as much as twice as cost-effective as the previous generation. Additionally, the Tesla P100 GPUs provide much faster memory and include a number of powerful new features.
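The dollars-per-TFLOPS figures above are simply the single-unit price divided by FP64 throughput. As a quick sanity check, here is a minimal Python sketch that reproduces the table's values (the helper name and dictionary are our own, not part of any NVIDIA tooling):

```python
# Sketch: recomputing the dollars-per-TFLOPS column from the table above.
# Prices are single-unit list prices before discounts; throughput is FP64.

def dollars_per_tflops(price_usd, fp64_tflops):
    """Cost-effectiveness metric: price divided by double-precision TFLOPS."""
    return price_usd / fp64_tflops

# (price in USD, FP64 throughput in TFLOPS), from the table above
gpus = {
    "Tesla P100 PCI-E 12GB": (5899, 4.7),
    "Tesla P100 PCI-E 16GB": (7374, 4.7),
    "Tesla P100 SXM2 16GB":  (9428, 5.3),
}

for model, (price, tflops) in gpus.items():
    print(f"{model}: ${dollars_per_tflops(price, tflops):,.0f} per TFLOPS")
```

Running the same calculation against the previous-generation figures ($2,342 for the K40 and $1,807 for the K80) makes the Pascal generation's advantage easy to quantify for a given budget.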

You may wish to reference our Comparison of Tesla “Pascal” GPUs, which summarizes the technical improvements made in these new GPUs and compares each of the new Tesla P100 GPU models. If you’re looking to see how these GPUs will be deployed in production, read our Tesla GPU Clusters page. As always, please feel free to reach out to us if you’d like to get a better understanding of these latest HPC systems and what they can do for you.
