Supercharge your next cluster with Tesla V100, P100 GPUs
Microway NVIDIA® Tesla® GPU Powered High Density Clusters
Microway’s preconfigured Tesla GPU clusters deliver supercomputing performance at lower power, lower cost, and with far fewer systems than standard CPU-only clusters. These clusters are powered by NVIDIA Tesla V100 “Volta” or Tesla P100 GPUs. Tesla GPUs speed the transition to energy-efficient parallel computing and scale to solve the world’s most important computing challenges more quickly and accurately.
Successfully deployed in demanding applications at research institutes, universities, and enterprises, Tesla GPUs also power many of the world’s fastest supercomputers.
Installed Software
A Microway Tesla preconfigured cluster comes preinstalled with:
- Linux distribution of your choice, including Red Hat, CentOS, Ubuntu, Debian, openSUSE or Gentoo
- NVIDIA CUDA® Toolkit, Libraries and SDK (a quick device-query sketch follows this list)
- Bright Cluster Manager, OpenHPC, or Microway Cluster Management Software (MCMS™) integrated with optional MPI Link-Checker™ Fabric Validation Suite
- User-level application and library installations are available for truly turn-key HPC
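For example, once the CUDA Toolkit is installed, a short device-query program can confirm that every GPU in a node is visible to the runtime. The sketch below is illustrative only (it is not part of Microway's installed tooling) and uses just the standard CUDA runtime API; compile it with nvcc from the preinstalled toolkit.

```cuda
// device_query.cu — a minimal sanity-check sketch (not Microway's tooling):
// lists the Tesla GPUs visible on a compute node after the CUDA Toolkit install.
// Build with: nvcc device_query.cu -o device_query
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices found: %d\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("  GPU %d: %s, %.1f GB memory, compute capability %d.%d\n",
               i, prop.name, prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}
```

On a four-GPU NumberSmasher node, this should list four Tesla devices along with their memory sizes and compute capabilities.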
GPU Speedups
Applications running CUDA-based code on Tesla GPUs see speedups of up to 250x in areas ranging from MATLAB workloads to computational fluid dynamics, molecular dynamics, quantum chemistry, imaging, signal processing, and bioinformatics. See NVIDIA’s full list of GPU-accelerated applications.
GPU Software Development Tools
It is easier than ever to harness GPUs in your applications. Many libraries include pre-optimized GPU support; language bindings are available for Python, Java, C#, C, C++, Fortran and more. OpenACC allows a single C/C++/Fortran codebase to be used across GPUs, x86 CPUs, and POWER8 CPUs (much like a GPU-accelerated OpenMP). Visit NVIDIA’s GPU Software Development Tools site for more resources.
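As a concrete illustration of how little code is needed to offload work to a Tesla GPU, here is a minimal CUDA SAXPY sketch (a generic example, not taken from NVIDIA's SDK samples). It uses CUDA managed memory so the same pointers are valid on both the host and the GPU; an OpenACC version would express the same loop with a single pragma instead of an explicit kernel.

```cuda
// saxpy.cu — an illustrative sketch of a data-parallel kernel that the
// CUDA Toolkit compiles for Tesla GPUs.
// Build with: nvcc saxpy.cu -o saxpy
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of y = a*x + y.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Managed (unified) memory keeps host and device views of the data in sync.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 4.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```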
Sample Microway Tesla GPU Cluster Specifications
Two CPUs and Four NVIDIA NVLink™ GPUs with 1U NumberSmasher compute nodes
| Specification | Details |
|---|---|
| GPUs per Node | (4) Tesla V100 or P100 SXM2.0 |
| Sample Cluster Size | One fully-integrated 42U rackmount cabinet with 32 Nodes (128 GPUs) |
| Base Platform | NumberSmasher 1U Tesla GPU Server with NVLink |
| System Memory per Node | 2 TB DDR4 2666MHz |
| Total GPU Memory per Node | 128GB (with NVLink-connected Tesla V100) or 64GB (with Tesla V100 or P100) |
| Head Node | Dual Xeon Server (1U–4U) with up to 3 TB memory; Optional NVIDIA Quadro™ Graphics or NVIDIA GRID™ |
| Storage | Head Node: up to 432 TB; Compute Nodes: up to 4 TB; Optional Storage Servers or Parallel HPC Storage System |
| Network | Dual Gigabit Ethernet built-in; Optional 10Gb or 40Gb Ethernet |
| Interconnect (optional) | ConnectX-4/5 100Gb EDR or 56Gb FDR InfiniBand Fabric |
| Cabinet | 42U APC NetShelter Cabinet (extra-depth model required due to chassis depth) |
| Green HPC Features | High-efficiency (80PLUS Platinum-Level) power supplies; Software/firmware to reduce power consumption on idle cores; Optional liquid-cooled rack doors (for thermally-neutral HPC) |
Two POWER9 with NVLink CPUs and 4/6 Tesla V100 GPUs with AC922 compute nodes
| Specification | Details |
|---|---|
| GPUs per Node | (4) Tesla V100 SXM2.0 (air cooled) or (6) Tesla V100 SXM2.0 (liquid cooled) |
| Sample Cluster Size | One fully-integrated 42U rackmount cabinet with 18 Power Systems AC922 Nodes (64 GPUs and 32 CPUs) |
| Base Platform | Power Systems AC922 with Tesla V100 with NVLink nodes; World’s first CPU:GPU coherence (POWER9 CPU and Tesla V100 GPU share the same memory space); Only platform with CPU:GPU NVLink (no PCI-E data bottleneck between POWER9 CPU and Tesla GPU) |
| System Memory per Node | Up to 2 TB DDR4 |
| Total GPU Memory per Node | 64GB (4-GPU node, air cooled) or 96GB (6-GPU node, liquid cooled) |
| Head Node | Dual POWER9 Server (1U–2U) with up to 1 TB memory |
| Storage | Head Node: up to 144 TB; Compute Nodes: up to 4 TB; Optional Storage Servers or Parallel HPC Storage System |
| Network | Dual Gigabit Ethernet built-in; Optional 10Gb/40Gb/100Gb Ethernet |
| Interconnect (optional) | ConnectX-5/6 100Gb EDR or 200Gb HDR InfiniBand Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80PLUS Platinum-Level) power supplies; Software/firmware to reduce power consumption on idle cores; Optional liquid cooling of nodes; Optional liquid-cooled rack doors (for thermally-neutral HPC) |
One CPU and Two GPUs with 1U NumberSmasher compute nodes
| Specification | Details |
|---|---|
| GPUs per Node | (2) Tesla V100 or P100 |
| Sample Cluster Size | One fully-integrated 42U rackmount cabinet with 32 Nodes (64 GPUs) |
| Base Platform | 1U Rackmount Server |
| System Memory per Node | 1.5 TB DDR4 |
| Total GPU Memory per Node | 32GB or 64GB (Tesla V100); 32GB (Tesla P100) |
| Head Node | Dual Xeon Server (1U–4U) with up to 3 TB memory; Optional NVIDIA Quadro Graphics or NVIDIA GRID |
| Storage | Head Node: up to 432 TB; Compute Nodes: up to 36 TB; Optional Storage Servers or Parallel HPC Storage System |
| Network | Dual Gigabit Ethernet built-in; Optional 10Gb or 40Gb Ethernet |
| Interconnect (optional) | ConnectX-4/5 100Gb EDR or 56Gb FDR InfiniBand Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80PLUS Platinum-Level) power supplies; Software/firmware to reduce power consumption on idle cores; Optional liquid-cooled rack doors (for thermally-neutral HPC) |
Two CPUs and up to Three GPUs with 1U NumberSmasher compute nodes
| Specification | Details |
|---|---|
| GPUs per Node | (3) Tesla V100 or P100 GPUs |
| Sample Cluster Size | One fully-integrated 42U rackmount cabinet with 32 Nodes (96 GPUs) |
| Base Platform | NumberSmasher 1U Tesla GPU Server |
| System Memory per Node | 2 TB DDR4 |
| Total GPU Memory per Node | 48GB or 96GB (Tesla V100); 48GB (Tesla P100) |
| Head Node | Dual Xeon Server (1U–4U) with up to 3 TB memory; Optional NVIDIA Quadro Graphics or NVIDIA GRID |
| Storage | Head Node: up to 432 TB; Compute Nodes: up to 4 TB; Optional Storage Servers or Parallel HPC Storage System |
| Network | Dual Gigabit Ethernet built-in; Optional 10Gb or 40Gb Ethernet |
| Interconnect (optional) | ConnectX-4/5 100Gb EDR or 56Gb FDR InfiniBand Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80PLUS Platinum-Level) power supplies; Software/firmware to reduce power consumption on idle cores; Optional liquid-cooled rack doors (for thermally-neutral HPC) |
Two CPUs and up to Ten GPUs with 4U Octoputer compute nodes
| Specification | Details |
|---|---|
| GPUs per Node | (8 or 10) Tesla V100 or P100; Optional NVIDIA Quadro Graphics or NVIDIA GRID in an additional slot |
| Sample Cluster Size | One fully-integrated 42U rackmount cabinet with 9 Nodes (72 GPUs for RDMA; 90 GPUs for Density) |
| Base Platform | Octoputer 4U 8-GPU Server or Octoputer 4U 10-GPU Server with Single Root Complex for GPU-Direct |
| System Memory per Node | 3 TB DDR4 |
| Total GPU Memory per Node | 160GB or 320GB (Tesla V100); 128GB or 160GB (Tesla P100) |
| Head Node | Dual Xeon Server (1U–4U) with up to 3 TB memory; Optional NVIDIA Quadro Graphics or NVIDIA GRID |
| Storage | Head Node: up to 432 TB; Compute Nodes: up to 120 TB; Optional Storage Servers or Parallel HPC Storage System |
| Network | Dual Gigabit Ethernet built-in; Optional 10Gb or 40Gb Ethernet |
| Interconnect (optional) | ConnectX-4/5 100Gb EDR or 56Gb FDR InfiniBand Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80PLUS Platinum/Titanium-Level) power supplies; Software/firmware to reduce power consumption on idle cores; Optional liquid-cooled rack doors (for thermally-neutral HPC) |
Additional GPU Server Options
Microway offers a wide variety of servers optimized for GPUs. Any of these may be custom-integrated and combined into a unique configuration to fit your requirements. View a full list of Microway’s GPU-Accelerated servers.
External GPU Acceleration Systems
An external GPU chassis lets you add GPU compute capability to your existing HPC infrastructure. We partner with several vendors to supply GPU expansion systems compatible with almost any of your servers. These external systems support both PCI-Express generation 2.0 and generation 3.0 GPU devices. Up to 16 GPUs may be connected to a single server via 1-meter cables. Contact us to learn more about compatibility and pricing.
Supported for Life
Our technicians and sales staff consistently ensure that your entire experience with Microway is handled promptly, creatively, and professionally.
Telephone support from Microway’s experienced technicians is available for the lifetime of your cluster. After the initial warranty period, hardware warranties are offered on an annual basis. Out-of-warranty repairs are available on a time & materials basis.