Tesla P100

GPU Test Drive

Verify the benefits of GPU-acceleration for your workloads

GPU-Accelerated Applications Available for Testing


Available Libraries

  • NVIDIA CUDA versions 6.0, 6.5, 7.0, 7.5, 8.0
  • NVIDIA cuDNN v2, v3, v4
  • FFTW3 (single and double precision builds)
  • HDF5
  • OpenBLAS
  • OpenCV
  • Python 2.7.9 with H5py, NumPy, pandas, PyCUDA, pydot, scikit-image, scikit-learn, SciPy, SymPy, Theano and more

MPI & Compiler Software

  • MVAPICH2 versions 2.0+
  • OpenMPI versions 1.8.x+
  • GNU GCC Compiler Collection (multiple versions, as needed)
    Provides C, C++ and Fortran compilers.
  • Intel Parallel Studio XE Cluster Edition (multiple versions, as needed)
    Provides C, C++ and Fortran compilers; Integrated Performance Primitives (IPP), Math Kernel Library (MKL), Cilk Plus, Threading Building Blocks (TBB), MPI Library, MPI Benchmarks, Trace Analyzer & Collector, VTune Amplifier XE, Inspector XE, Advisor XE
  • PGI Accelerator Fortran/C/C++ Server (multiple versions, as needed)
    Provides Portland Group C, C++ and Fortran compilers. GPU-acceleration is supported via CUDA Fortran and OpenACC.

Systems Available for Testing

[Photograph: Microway HPC clusters of various sizes and configurations]
Microway offers a Linux and Windows benchmark cluster for customers to test GPU-enabled applications. The cluster includes:

  • Microway NumberSmasher GPU Nodes
  • Four NVIDIA Tesla P100 GPUs per node (M40, K80, K40 and K20 are also available)
  • Professional Graphics – NVIDIA Quadro M4000
  • Two 14-core Intel Xeon E5-2690v4 series “Broadwell” CPUs in each node
  • 256GB DDR4 memory in each node
  • Intel Direct I/O with PCI-E 3.0 support
  • FDR InfiniBand HCAs and switching
  • Over 18 TFLOPS single-precision and 9 TFLOPS double-precision GPU performance per node
  • CentOS Linux or Windows 8.1*
  • Pre-configured GPU-enabled software packages
  • Alternate test configurations available upon request.

*Windows 8.1 users must provide their own applications.


[Registration form: name, e-mail, how you heard about us, benchmark details, operating system, timeframe for testing, additional requirements/comments]

Why GPUs?

Unlike traditional CPUs, which are designed for general-purpose software, Tesla GPUs are designed specifically to deliver the highest possible compute performance. For many applications, a GPU-accelerated system will be 5X to 25X faster than a CPU-only system:

[Chart: performance of NVIDIA Tesla P100 PCIe GPUs versus Tesla K80 GPUs for HPC applications]

The Tesla P100 GPUs are NVIDIA's latest and fastest accelerators. Based on the Pascal architecture, they feature:

Improved compute performance per GPU

Up to 5.3 TFLOPS double- and 10.6 TFLOPS single-precision floating-point performance

Faster GPU memory

High-bandwidth HBM2 memory provides roughly 3X the memory bandwidth of previous Tesla GPUs

Faster connectivity

NVLink provides 5X faster transfers than PCI-Express

Pascal Unified Memory

Allows GPU applications to directly access the memory of all GPUs in the system, as well as all of the system memory

Direct CPU-to-GPU NVLink connectivity

OpenPOWER systems support NVLink transfers between the CPUs and GPUs

Try today on advanced, fully integrated hardware

Whether you use community-built code or have in-house GPU-accelerated applications, we are offering remote benchmarking time on our latest hardware. This includes NVIDIA Tesla P100, K80, and M40 GPUs with over 3X the performance of previous Tesla GPUs.

See how fast your code can run

To log in and test your code, register using the form above. After registration, you will receive an e-mail with instructions. For any questions, please e-mail wespeakhpc@microway.com.

Tesla GPU Accelerated Applications

NVIDIA Tesla GPU compute processors accelerate many common scientific codes – AMBER, NAMD and LAMMPS are just a few of the applications enjoying significant speed-ups. You can run your own code or one of the preloaded applications.

Read Our Blog on GPU Benchmarking
