Tesla P100

NVIDIA Tesla GPU Computing

Powering the world’s leading supercomputers, NVIDIA Tesla GPUs deliver supercomputing performance at lower power and lower cost, using far fewer servers than standard CPU-only compute systems.

As an NVIDIA Elite Solution Provider, Microway designs customized GPU clusters, servers, and WhisperStations based on NVIDIA Tesla and Quadro GPUs. We have been selected as the vendor of choice for a number of NVIDIA GPU Research Centers, including Carnegie Mellon University, Harvard, Johns Hopkins, and Massachusetts General Hospital.

Tesla V100 – World’s Most Advanced Datacenter GPU, for AI & HPC

Integrated in Microway NumberSmasher and OpenPOWER GPU Servers & GPU Clusters

Specifications: Tesla V100 SXM2 GPU

  • Up to 7.5 TFLOPS double- and 15 TFLOPS single-precision floating-point performance
  • NVIDIA “Volta” GPU architecture
  • 5120 CUDA cores, 640 Tensor Cores
  • 16GB of on-package, stacked HBM2 GPU memory
  • Memory bandwidth up to 900GB/s
  • NVLink or PCI-E x16 Gen3 interface to system
  • Available with enhanced NVLink interface, with 300GB/sec bi-directional bandwidth to the GPU
  • Passive heatsink only, suitable for specially-designed GPU servers

Tesla P100 – Strong Performance and Connectivity for HPC or AI

Integrated in Microway NumberSmasher and OpenPOWER GPU Servers & GPU Clusters

Specifications: Tesla P100 Socketed GPU

  • Up to 5.3 TFLOPS double- and 10.6 TFLOPS single-precision floating-point performance
  • NVIDIA “Pascal” GP100 graphics processing unit (GPU)
  • 3584 CUDA cores
  • 12GB or 16GB of on-package, stacked HBM2 (CoWoS) GPU memory
  • Memory bandwidth up to 732GB/s
  • NVLink or PCI-E x16 Gen3 interface to system
  • Passive heatsink only, suitable for specially-designed GPU servers

Tesla K80 – Density and Performance per Watt

Integrated in Microway NumberSmasher GPU Servers and GPU Clusters

Specifications: NVIDIA Tesla K80

  • 5.6 TFLOPS single- and 1.87 TFLOPS double-precision floating-point performance (base clocks)
  • Two GK210 chips on a single PCB
  • 4992 CUDA cores, 2496 per chip
  • 24GB GDDR5 memory (12GB per chip)
  • Memory bandwidth up to 480GB/s
  • Dynamic GPU Boost for performance optimization
  • 8.74 TFLOPS single precision, 2.91 TFLOPS double precision with GPU Boost
  • PCI-E x16 Gen3 interface to system
  • Passive heatsink only, suitable for specially-designed GPU servers

Tesla P40 – Ideal for Deep Learning Inference

Integrated in Microway NumberSmasher GPU Servers and GPU Clusters

Specifications: NVIDIA Tesla P40

  • 12 TFLOPS single-precision floating point performance
  • 47 TOPS (tera-operations per second) of INT8 performance for inference
  • NVIDIA “Pascal” GP102 graphics processing unit (GPU)
  • 3840 CUDA cores
  • 24GB GDDR5 memory with ECC protection
  • Memory bandwidth up to 346GB/s
  • Dynamic GPU Boost for performance optimization
  • PCI-E x16 Gen3 interface to system
  • Passive heatsink only, suitable for specially-designed GPU servers

Unique features available in the latest NVIDIA GPUs include:

  • High-speed, on-package stacked GPU memory
  • NVLink interconnect accelerates data transfers by up to 10X compared to PCI-Express
  • Unified Memory allows applications to directly access the memory of all GPUs and all of system memory
  • Direct CPU-to-GPU NVLink connectivity on OpenPOWER systems supports NVLink transfers between the CPUs and GPUs
  • ECC memory error protection – meets a critical requirement for computing accuracy and reliability in data centers and supercomputing centers.
  • System monitoring features – integrate the GPU subsystem with the host system’s monitoring and management capabilities such as IPMI. IT staff can manage the GPU processors in the computing system with widely-used cluster/grid management tools.
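The Unified Memory feature listed above can be illustrated with a short CUDA sketch (the kernel and variable names are ours, not from any particular Microway configuration): a single `cudaMallocManaged` allocation is dereferenced by both host and device code, with no explicit `cudaMemcpy` calls.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Simple kernel: double each element in place.
__global__ void scale(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main() {
    const int n = 1 << 20;
    float *data;

    // One allocation visible to both CPU and GPU --
    // Unified Memory migrates pages on demand.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = 1.0f;  // CPU writes

    scale<<<(n + 255) / 256, 256>>>(data, n);    // GPU reads and writes
    cudaDeviceSynchronize();                     // wait before the CPU reads

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

On OpenPOWER systems with CPU-to-GPU NVLink, the same code benefits from the faster page migration path between system memory and GPU memory.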

Many of the most popular applications already feature GPU support. Your own applications may take advantage of GPU acceleration through several different avenues:

  • “Drop-in” GPU-accelerated libraries – provide high-speed implementations of the functions your application currently executes on CPUs.
  • OpenACC / OpenMP Compiler directives – allow you to quickly add GPU acceleration to the most performance critical sections of your application while maintaining portability.
  • CUDA integrated with C, C++ or Fortran – provides maximum performance and flexibility for your applications. Third-party language extensions are available for a host of languages, including Java, Mathematica, MATLAB, Perl and Python.
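As a sketch of the "drop-in library" route above, a BLAS SAXPY call can be moved to the GPU by replacing the CPU BLAS call with its cuBLAS equivalent; the interface mirrors standard BLAS (error checking omitted for brevity):

```cuda
#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    const int n = 4;
    float hx[n] = {1, 2, 3, 4}, hy[n] = {0, 0, 0, 0};
    const float alpha = 10.0f;
    float *dx, *dy;

    // Stage the input vectors in GPU memory.
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    cudaMemcpy(dx, hx, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // y = alpha * x + y, computed on the GPU -- the same SAXPY
    // operation a CPU BLAS library provides.
    cublasSaxpy(handle, n, &alpha, dx, 1, dy, 1);

    cudaMemcpy(hy, dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cublasDestroy(handle);
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

The OpenACC route is similarly incremental: a `#pragma acc parallel loop` directive placed above an existing hot loop asks the compiler to offload it, leaving the original source portable to CPU-only builds.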

Tesla GPU computing solutions fit seamlessly into your existing workstation or HPC infrastructure, enabling you to solve problems orders of magnitude faster.

Call a Microway Sales Engineer for assistance: 508.746.7341 or
Click Here to Request More Information.
