Fastest Accelerated HPC and Big Data
Built with the Leading Servers for HPC & AI
Microway’s IBM POWER clusters are designed for leaders in AI and HPC who need extraordinary application performance. The POWER8 and POWER9 processors tightly integrate with accelerators, deliver superior memory bandwidth, and enable radically simpler GPU and FPGA accelerator programming than competing platforms.
Installed Software
A Microway preconfigured cluster comes preinstalled with:
- Linux distribution of your choice, including Red Hat, CentOS, or Ubuntu
- NVIDIA CUDA Toolkit, Libraries and SDK
- Microway Cluster Management Software (MCMS™) or IBM Spectrum Cluster Manager
- IBM XL or Portland Group Compilers (PGI)
Sample Microway IBM POWER Cluster Specifications
World's Simplest GPU Programming & GPU Performance
AC922 compute nodes with two POWER9 with NVLink CPUs and 4 or 6 Tesla V100 GPUs
Sample Cluster Size | 20 Power Systems AC922 Nodes (64 GPUs and 32 CPUs) in a 42U Cabinet
---|---
Base Platform | Power Systems AC922 nodes with Tesla V100 GPUs with NVLink. World’s first CPU:GPU coherence: POWER9 CPU and Tesla V100 GPU share the same memory space. Only platform with CPU:GPU NVLink, so there is no PCI-E data bottleneck between POWER9 CPU and Tesla GPU
GPUs per Node | (4) Tesla V100 SXM2 (air cooled) or (6) Tesla V100 SXM2 (liquid cooled)
System Memory per Node | Up to 2TB DDR4
Total GPU Memory per Node | 64GB (4-GPU node, air cooled) or 96GB (6-GPU node, liquid cooled)
Head Node | Dual POWER9 server (1U–2U) with up to 2TB memory
Storage | Head node: up to 216TB; compute nodes: up to 4TB; optional storage servers or parallel HPC storage system
Network | Dual Gigabit Ethernet built-in; optional 10Gb or 100Gb Ethernet
Interconnect (optional) | ConnectX-6 200Gb HDR or ConnectX-5 100Gb EDR InfiniBand fabric
Cabinet | 42U APC NetShelter cabinet
Green HPC Features | High-efficiency (80PLUS Platinum-level) power supplies; software/firmware to reduce power consumption on idle cores; optional liquid cooling of nodes; optional liquid-cooled rack doors (for thermally-neutral HPC)
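As a sanity check on the per-node GPU memory figures above, a minimal sketch of the arithmetic (assuming the 16GB Tesla V100 SXM2 variant, which is what the 64GB and 96GB totals imply):

```python
# Per-node GPU memory for the AC922 configurations above.
# Assumption: the 16 GB Tesla V100 SXM2 variant, as implied by the
# table's 64 GB (4-GPU) and 96 GB (6-GPU) totals.
V100_MEMORY_GB = 16

def node_gpu_memory_gb(gpus_per_node: int) -> int:
    """Total GPU memory per node, in GB."""
    return gpus_per_node * V100_MEMORY_GB

print(node_gpu_memory_gb(4))  # air-cooled node -> 64
print(node_gpu_memory_gb(6))  # liquid-cooled node -> 96
```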
Hadoop/HPDA
Two CPUs, 176 threads per Node, designed for Big Data workloads
Total Cluster Thread Count | Up to 3520 threads
---|---
Sample Cluster Size | One fully-integrated 42U rackmount cabinet with 20 Power Systems LC922 nodes
Base Platform | Power Systems LC922
CPUs/Cores/Threads per Node | 2x 16-, 20-, or 22-core POWER9 processors; 4 threads per core, up to 176 threads per node
System Memory per Node | Up to 2TB DDR4
Head Node | Dual POWER9 server (1U–2U) with up to 1TB memory
Storage | Head node: up to 216TB; compute nodes: up to 216TB; optional storage servers or parallel HPC storage system
Network | Quad 10 Gigabit Ethernet built-in; optional 25Gb/50Gb/100Gb Ethernet
Interconnect (optional) | Mellanox low-latency Ethernet with RDMA support
Cabinet | 42U APC NetShelter cabinet
Green HPC Features | High-efficiency (80PLUS Platinum-level) power supplies; software/firmware to reduce power consumption on idle cores; optional liquid cooling of nodes; optional liquid-cooled rack doors (for thermally-neutral computing)
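The thread counts in the table follow directly from POWER9’s four-way simultaneous multithreading (SMT4); a short sketch of the arithmetic:

```python
# POWER9 runs up to four hardware threads per core (SMT4).
SMT_THREADS_PER_CORE = 4
CPUS_PER_NODE = 2

def threads_per_node(cores_per_cpu: int) -> int:
    """Hardware threads on one dual-socket POWER9 node."""
    return CPUS_PER_NODE * cores_per_cpu * SMT_THREADS_PER_CORE

def cluster_threads(nodes: int, cores_per_cpu: int) -> int:
    """Total hardware threads across the cluster."""
    return nodes * threads_per_node(cores_per_cpu)

# LC922 with 22-core CPUs: 176 threads per node,
# 3520 threads across a 20-node cabinet.
print(threads_per_node(22), cluster_threads(20, 22))
```

The same arithmetic gives the LC921 figures below: 20-core CPUs yield 160 threads per node, or 6400 threads across 40 nodes.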
Virtualized Density
Two CPUs, 160 threads per Node, in a 1U Footprint
Total Cluster Thread Count | Up to 6400 threads
---|---
Sample Cluster Size | One fully-integrated 42U rackmount cabinet with 40 Power Systems LC921 nodes
Base Platform | Power Systems LC921
CPUs/Cores/Threads per Node | 2x 16- or 20-core POWER9 processors; 4 threads per core, up to 160 threads per node
System Memory per Node | Up to 2TB DDR4
Head Node | Dual POWER9 server (1U–2U) with up to 2TB memory
Storage | Head node: up to 216TB; compute nodes: up to 72TB; optional storage servers or parallel HPC storage system
Network | Quad 10 Gigabit Ethernet built-in; optional 25Gb/50Gb/100Gb Ethernet
Interconnect (optional) | Mellanox low-latency Ethernet with RDMA support
Cabinet | 42U APC NetShelter cabinet
Green HPC Features | High-efficiency (80PLUS Platinum-level) power supplies; software/firmware to reduce power consumption on idle cores; optional liquid cooling of nodes; optional liquid-cooled rack doors (for thermally-neutral computing)
Call a Microway Sales Engineer for assistance at 508.746.7341, or request more information online.
Supported for Life
Our technicians and sales staff ensure that your entire experience with Microway is handled promptly, creatively, and professionally.
Telephone support from Microway’s experienced technicians is available for the lifetime of your cluster. After the initial warranty period, hardware warranties are offered on an annual basis, and out-of-warranty repairs are available on a time & materials basis.