Tesla P100 GPU Servers for High-Performance Technical Computing

NVIDIA Tesla P100 GPUs are enabling a new level of performance for HPC and technical computing workloads. Microway is proud to offer a variety of performance-tuned GPU systems with a host of new capabilities. Features include:

  • Improved compute performance per GPU
    Up to 5.3 TFLOPS double- and 10.6 TFLOPS single-precision floating-point performance
  • Faster GPU memory
    High-bandwidth HBM2 memory provides up to 3X the memory bandwidth of the previous GPU generation
  • Faster connectivity
    NVLink provides 5X faster transfers than PCI-Express
  • Pascal Unified Memory
    Allows GPU applications to directly access the memory of all GPUs and all of system memory (see the sketch following this list)
  • Direct CPU-to-GPU NVLink connectivity
    OpenPOWER systems support NVLink transfers between the CPUs and GPUs
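
Pascal Unified Memory is exposed through the standard CUDA runtime: a single cudaMallocManaged() allocation is visible to host code and to every GPU, with pages migrating between memories on demand. A minimal sketch (the kernel, array size, and launch configuration are illustrative; error checking omitted):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Kernel that increments every element of a managed array in place.
    __global__ void increment(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;

        // A single allocation visible to the CPU and to every GPU.
        // On Pascal, pages migrate on demand, so the allocation may even
        // exceed the memory capacity of a single GPU.
        cudaMallocManaged(&data, n * sizeof(float));

        for (int i = 0; i < n; ++i) data[i] = 0.0f;    // touched on the CPU

        increment<<<(n + 255) / 256, 256>>>(data, n);  // touched on the GPU
        cudaDeviceSynchronize();

        printf("data[0] = %f\n", data[0]);             // read back on the CPU
        cudaFree(data);
        return 0;
    }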

The Tesla P100 GPUs are built upon NVIDIA’s latest “Pascal” architecture. Unlike previous generations, these GPUs are available in two flavors:

  • SXM2 with NVLink – a new form factor specifically designed for speed and performance
  • Standard PCI-Express – a cost-effective and backwards compatible GPU

Microway servers and clusters are available with both varieties. There are many choices, but our experts are available to help you determine which is best suited to your workloads and to customize the system for optimal performance, ensuring you receive the ideal configuration.

Tesla P100 GPU Servers with NVLink Connectivity

With the new “Pascal” GPU architecture, NVIDIA introduces NVLink connectivity, which operates up to five times faster than PCI-Express. This offers significant performance benefits to any application running across multiple GPUs. OpenPOWER systems also enable NVLink connectivity between the system CPUs and the GPUs, further reducing bottlenecks as data moves through the system.
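
From the application's point of view, NVLink requires no code changes: the same CUDA peer-to-peer calls used over PCI-Express are simply carried over the faster links when they are present. A minimal sketch of a direct GPU-to-GPU copy (device IDs and buffer size are illustrative; error checking omitted):

    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        const size_t bytes = 256UL << 20;          // 256 MB test buffer
        void *src = nullptr, *dst = nullptr;

        cudaSetDevice(0);
        cudaMalloc(&src, bytes);                   // buffer on GPU 0
        cudaSetDevice(1);
        cudaMalloc(&dst, bytes);                   // buffer on GPU 1

        // Enable direct access from GPU 1 to GPU 0's memory when supported.
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 1, 0);
        if (canAccess)
            cudaDeviceEnablePeerAccess(0, 0);      // current device is GPU 1

        // Direct GPU-to-GPU copy: carried over NVLink when the GPUs are
        // linked, otherwise over PCI-Express (or staged through the host).
        cudaMemcpyPeer(dst, 1, src, 0, bytes);
        cudaDeviceSynchronize();

        printf("peer access: %s\n", canAccess ? "enabled" : "not available");
        cudaFree(dst);
        cudaSetDevice(0);
        cudaFree(src);
        return 0;
    }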

NumberSmasher 1U Tesla GPU Server with GPU-to-GPU NVLink
[Block diagram: NumberSmasher 1U Tesla GPU Server with NVLink]
Microway’s GPU server with NVLink provides two CPUs and four GPUs within a compact 1U rackmount server platform. These systems enable the highest-density compute available on the market, and leverage the latest HPC and connectivity technologies.
(2) Xeon CPUs, (4) Tesla GPUs with full NVLink connectivity
NVIDIA DGX-1 Deep Learning System with GPU-to-GPU NVLink
[Block diagram: NVIDIA x86 server system with 8 Tesla P100 GPUs connected in a hybrid cube mesh]
As Deep Learning enters the mainstream, NVIDIA DGX-1 is uniquely positioned to provide the best performance when training neural networks and running production-scale classification workloads. To be successful, data scientists and artificial intelligence researchers require quick iterations of their neural network models. The NVIDIA DGX-1 delivers the fastest performance available in the world.
(2) Xeon CPUs, (8) Tesla GPUs with NVLink hybrid cube mesh
OpenPOWER 2U Server with CPU-to-GPU and GPU-to-GPU NVLink
[Block diagram: Microway OpenPOWER GPU Server with NVLink GPUs]
Microway’s OpenPOWER GPU server with NVLink provides two IBM POWER8 CPUs and four NVIDIA Tesla P100 GPUs within a 2U rackmount server platform. These systems provide an intelligently-balanced platform for next-generation high performance computing clusters.
(2) POWER8 CPUs, (4) Tesla GPUs with CPU-to-GPU NVLink

Summary of available Tesla P100 GPU platforms supporting NVLink

Feature              NumberSmasher          NVIDIA DGX-1           OpenPOWER
CPUs                 (2) Intel Xeon         (2) Intel Xeon         (2) IBM POWER8+NVLink
System Memory        up to 1TB              512GB                  up to 1TB
GPUs                 (4) Tesla P100 SXM2    (8) Tesla P100 SXM2    (4) Tesla P100 SXM2
Total GPU Memory     64GB                   128GB                  64GB
GPU-to-GPU NVLink    fully-connected        hybrid cube mesh       two GPUs per CPU
CPU-to-GPU NVLink    not supported          not supported          supported
Rack Height          1U                     3U                     2U

Tesla P100 GPU Servers with PCI-Express Connectivity

Tesla P100 is also available with standard PCI-Express connectivity. These are available in cost-effective configurations with one to ten GPUs per server. Several recommendations are shown below – view the full list on our Tesla GPU Servers page.

NumberSmasher 1U Tesla GPU Server with 3 GPUs
[Diagram: compute components of the NumberSmasher 1U Tesla GPU Server with three GPUs]
The 3-GPU 1U NumberSmasher server provides two CPUs and three GPUs within a compact 1U rackmount server platform. These systems are popular due to their cost-effective design and compatibility with almost all existing datacenter racks.
(2) Xeon CPUs, (3) Tesla GPUs with PCI-Express connectivity
NumberSmasher 1U Tesla GPU Server with 4 GPUs
[Diagram: compute components of the NumberSmasher 1U Tesla GPU Server with four GPUs]
Microway’s 4-GPU 1U server provides two CPUs and four GPUs within a compact 1U rackmount server platform. These systems enable the highest-density PCI-Express based compute clusters.
(2) Xeon CPUs, (4) Tesla GPUs with PCI-Express connectivity
Octoputer 4U Server with 8 GPUs
[Block diagram: Octoputer 8-GPU server]
Microway’s Octoputer 8-GPU server is a cost-effective design supporting large numbers of GPUs, as well as many CPU cores, large memory, and high-speed storage.
(2) Xeon CPUs, (8) Tesla GPUs with PCI-Express connectivity
Octoputer 4U Server with up to 10 GPUs
[Block diagram: Octoputer server configured for GPU-Direct RDMA]
The single-root-complex Octoputer server provides the largest number of GPUs (up to 10) on a single PCI-Express tree. This is an ideal platform for applications which make full use of many GPU accelerators and for those that support GPU-Direct RDMA (a sketch of that usage pattern follows below).
(2) Xeon CPUs, (10) Tesla GPUs with single-root PCI-Express connectivity
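
GPU-Direct RDMA is most commonly reached through a CUDA-aware MPI library, which accepts device pointers directly and can move the data over the interconnect without staging it in host memory. A minimal sketch, assuming an MPI build with CUDA support (the rank-to-GPU mapping and buffer size are illustrative; error checking omitted):

    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank = 0;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        float *buf = nullptr;
        cudaSetDevice(rank % 4);       // illustrative: up to 4 GPUs per node
        cudaMalloc(&buf, n * sizeof(float));

        // The device pointer is handed straight to MPI; a CUDA-aware build
        // can move it with GPU-Direct RDMA instead of staging in host memory.
        if (rank == 0)
            MPI_Send(buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        cudaFree(buf);
        MPI_Finalize();
        return 0;
    }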

Summary of Tesla P100 PCI-Express GPU platforms*

Feature              1U 3-GPU             1U 4-GPU             4U 8-GPU             4U 10-GPU
CPUs                 (2) Intel Xeon       (2) Intel Xeon       (2) Intel Xeon       (2) Intel Xeon
System Memory        up to 1TB            up to 1TB            up to 1.5TB          up to 1.5TB
GPUs                 (3) Tesla P100       (4) Tesla P100       (8) Tesla P100       (10) Tesla P100
Total GPU Memory     48GB                 64GB                 128GB                160GB
Drive Bays           (4) Hot-Swap 2.5″    (2) Hot-Swap 2.5″    (24) Hot-Swap 2.5″   (24) Hot-Swap 2.5″
Rack Height          1U                   1U                   4U                   4U

* Microway offers a wide variety of systems – view the full list of Tesla GPU Servers

Test Tesla P100 GPUs for Yourself

Microway maintains a state-of-the-art benchmark cluster with the latest HPC systems and components. Whether you are interested in Tesla SXM2 GPUs with NVLink or Tesla PCI-Express GPUs, and in Intel Xeon x86 CPUs or IBM POWER8+NVLink CPUs, we can help you evaluate their performance. See for yourself how the systems compare – sign up to test drive the new Tesla P100 GPUs before you purchase.
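
A quick first step on any evaluation system is to enumerate the GPUs and check which pairs can reach each other directly, since that reflects the NVLink or PCI-Express topology of the machine. A minimal sketch using the CUDA runtime (error checking omitted):

    #include <cstdio>
    #include <cuda_runtime.h>

    // Lists each GPU and reports which pairs can access each other directly,
    // which reflects how the NVLink / PCI-Express topology is wired.
    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);

        for (int i = 0; i < count; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, %.0f GB memory\n",
                   i, prop.name, prop.totalGlobalMem / 1e9);
        }

        for (int i = 0; i < count; ++i)
            for (int j = 0; j < count; ++j) {
                if (i == j) continue;
                int peer = 0;
                cudaDeviceCanAccessPeer(&peer, i, j);
                printf("GPU %d -> GPU %d direct access: %s\n",
                       i, j, peer ? "yes" : "no");
            }
        return 0;
    }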
