Performance Architecture for Next-Generation HPC
Microway NumberSmasher™ Dense Clusters with Xeon Scalable Processors
NumberSmasher clusters integrate the latest Intel Xeon Scalable Processors (“Cascade Lake-SP”) and scale from small departmental clusters to large, shared HPC resources. Years of Microway Linux cluster design and integration expertise ensure your cluster arrives with superior performance and functionality compared to competing hardware. Our experts would be happy to guide you if you have any questions!
For dense High Performance Computing clusters, we provide up to 4480 processor cores in a standard 42U rackmount cabinet. Our NumberSmasher Twin2 servers pack four dual-processor compute nodes (totaling 224 cores) into a 2U chassis, effectively doubling rack density.
Microway also offers standard 1U nodes with a density of up to 56 processor cores per rack unit. Up to 2240 cores can be achieved in a single cabinet. Customers requiring increased storage or I/O may choose 2U or 4U nodes with reduced cabinet density.
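To see how these density figures fit together, the short sketch below reproduces the arithmetic. It assumes 28-core Xeon Scalable CPUs, dual-CPU nodes, and 40U of the 42U cabinet populated with compute nodes; these are illustrative assumptions consistent with the figures above, not a published configuration.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions (not an official bill of materials):
     * 28-core Xeon Scalable CPUs, dual-CPU nodes, and 40U of the 42U
     * cabinet reserved for compute nodes. */
    const int cores_per_cpu = 28;
    const int cpus_per_node = 2;
    const int compute_units = 40;   /* rack units available for compute */

    /* Twin2: four dual-CPU nodes per 2U chassis */
    int twin2_chassis_cores = cores_per_cpu * cpus_per_node * 4;          /* 224  */
    int twin2_rack_cores    = twin2_chassis_cores * (compute_units / 2);  /* 4480 */

    /* Standard 1U: one dual-CPU node per rack unit */
    int u1_node_cores = cores_per_cpu * cpus_per_node;                    /* 56   */
    int u1_rack_cores = u1_node_cores * compute_units;                    /* 2240 */

    printf("Twin2: %d cores per chassis, %d cores per cabinet\n",
           twin2_chassis_cores, twin2_rack_cores);
    printf("1U:    %d cores per node,   %d cores per cabinet\n",
           u1_node_cores, u1_rack_cores);
    return 0;
}
```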
Finally, NumberSmasher QuadPuter 4P nodes offer an SMP-based Xeon Scalable Processor platform in a 2U form factor. With up to 112 cores and 6 TB of DDR4 memory, they are ideally suited for large-memory nodes.
Installed Software
Microway NumberSmasher clusters come preinstalled with:
- Linux distribution of your choice, including Red Hat, SuSE, Debian, Fedora and Gentoo
- Bright Cluster Manager, OpenHPC, or Microway Cluster Management Software (MCMS™) integrated with optional MPI Link-Checker™ Fabric Validation Suite
- Intel Composer or Cluster Studio XE tools & Intel Math Kernel Library (MKL)
- Portland Group Compilers (PGI)
- User-level applications and libraries (optional)
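As a quick post-installation check using the toolchains above, a minimal MPI program can confirm that every node in the cluster launches jobs and reports its hostname. The example below is a generic sketch, not part of Microway’s validation suite; the compiler wrapper (for example, Intel MPI’s mpiicc or a GNU-based mpicc) and the launch command depend on the MPI stack and scheduler you select.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's rank    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total ranks in the job */
    MPI_Get_processor_name(name, &name_len);   /* hostname of the node   */

    printf("Rank %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```

Launched with, for example, mpirun -np 224 ./mpi_hello across one fully populated Twin2 chassis, every core should report in; ranks that are missing or slow to start are an early hint of fabric or configuration issues, which the optional MPI Link-Checker Fabric Validation Suite examines in far greater depth.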
Sample Microway Xeon Cluster Specifications
Lowest Cost & Highest Density
Up to 224 processor cores per 2U NumberSmasher Twin2 Server chassis (112 cores per 1U)
Sample Cluster Size | 80 Nodes (160 CPUs) in a 42U rackmount cabinet |
---|---|
Base Platform | 2U Twin2 Rackmount Server with redundant power supplies; four nodes, each with (2) Intel Xeon Scalable Processors |
System Memory per Node | Up to 2 TB DDR4-2933 or DDR4-2666; 1.5 TB DDR4-2933 (12 DIMMs) is the recommended maximum |
Head Node | Dual Xeon Server (1U – 4U) with up to 3 TB memory |
Storage | Head Node: up to 648 TB; Compute Nodes: up to 54 TB; optional Storage Servers or Parallel HPC Storage System |
GPUs | Not typically supported internally; mix and match nodes from Microway’s GPU-optimized High Performance Computing Clusters |
Network | Dual Gigabit Ethernet built-in; optional 10/40G or 25G Ethernet |
Interconnect (optional) | ConnectX-5 100Gb EDR InfiniBand, Intel Omni-Path 100Gb Fabric |
Cabinet | 42U APC NetShelter Cabinet |
Green HPC Features | High-efficiency (80 PLUS Platinum-level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Balanced & Flexible
Up to 56 processor cores and 1.50 TB memory per 1U node
Sample Cluster Size | 40 Nodes (80 CPUs) in a 42U rackmount cabinet |
---|---|
Base Platform | 1U Rackmount Server with optional redundant power supplies; (2) Intel Xeon Scalable Processors |
System Memory per Node | Up to 3 TB DDR4-2933 or DDR4-2666 |
Head Node | Dual Xeon Server (1U – 4U) with up to 3 TB memory |
Storage | Head Node: up to 648 TB; Compute Nodes: up to 72 TB; optional Storage Servers or Parallel HPC Storage System |
GPUs (optional) | NVIDIA Tesla V100 or T4 GPU Compute Processors; various node types from Microway’s GPU-optimized HPC Clusters |
Network | Dual Gigabit Ethernet built-in; optional 10/40G or 25/50/100G Ethernet |
Interconnect (optional) | ConnectX-6 200Gb HDR, ConnectX-5 100Gb EDR InfiniBand, Intel Omni-Path 100Gb Fabric |
Cabinet | 42U APC NetShelter Cabinet |
Green HPC Features | High-efficiency (80 PLUS Platinum-level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Most Memory Per Server
Up to 112 processor cores and 6 TB memory per node with NumberSmasher QuadPuter
Sample Cluster Size | 20 Nodes (80 CPUs) in a 42U rackmount cabinet |
---|---|
Base Platform | QuadPuter 2U Rackmount Server; (4) Intel Xeon Scalable Processor “Cascade Lake-SP” CPUs |
System Memory per Node | Up to 6 TB DDR4-2933 or DDR4-2666 |
Head Node | Dual or Quad Xeon Server (1U – 4U) with up to 6 TB memory |
Storage | Head Node: up to 648 TB; Compute Nodes: up to 54 TB; optional Storage Servers or Parallel HPC Storage System |
Network | Dual Gigabit Ethernet built-in; optional 10/40G or 25/50/100G Ethernet |
Interconnect (optional) | ConnectX-6 200Gb HDR, ConnectX-5 100Gb EDR InfiniBand, Intel Omni-Path 100Gb Fabric |
Cabinet | 42U APC NetShelter Cabinet |
Green HPC Features | High-efficiency (80 PLUS Platinum-level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Max Cores & Memory BW Per 2P Server
Up to 192 processor cores per NumberSmasher 2U Twin Maxx Server chassis (Two Servers in 2U)
Sample Cluster Size | 40 Nodes (80 CPUs) in a 42U rackmount cabinet |
---|---|
Base Platform | NumberSmasher 2U Twin Maxx Server chassis (Two Servers in 2U) with redundant power supplies; two nodes, each with (2) 32/48-core Intel Xeon Scalable Platinum 9200 Series Processors |
System Memory per Node | 3 TB DDR4-2933 |
Head Node | Dual Xeon Server (1U – 4U) with up to 3 TB memory |
Storage | Head Node: up to 648 TB; Compute Nodes: up to 36 TB; optional Storage Servers or Parallel HPC Storage System |
GPUs | Not typically supported internally; mix and match nodes from Microway’s GPU-optimized HPC Clusters |
Network | Dual Gigabit Ethernet built-in |
Interconnect (optional) | Intel Omni-Path 100Gb Fabric, ConnectX-6 200Gb HDR, ConnectX-5 100Gb EDR InfiniBand |
Cabinet | 42U APC NetShelter Cabinet |
Green HPC Features | Optional Liquid Cooling (contact Microway for details) |
Call a Microway Sales Engineer for assistance at 508.746.7341, or click here to request more information.
Supported for Life
Our technicians and sales staff consistently ensure that your entire experience with Microway is handled promptly, creatively, and professionally.
Telephone support from Microway’s experienced technicians is available for the lifetime of your cluster. After the initial warranty period, hardware warranties are offered on an annual basis, and out-of-warranty repairs are available on a time & materials basis.