Performance Architecture for Next-Generation HPC
Microway NumberSmasher™ Dense Clusters with Xeon Scalable Processors
NumberSmasher clusters integrate the latest Intel Xeon Scalable Processors, “Ice Lake-SP,” and scale from small departmental clusters to large-scale, shared HPC resources. Years of Microway expertise in Linux cluster design and integration ensure your cluster arrives with superior performance and functionality compared to competing hardware.
For dense High Performance Computing clusters, we provide up to 6400 processor cores in a standard 42U rackmount cabinet. Our NumberSmasher Twin2 servers pack four dual-processor compute nodes (totaling 320 cores) into a 2U chassis, effectively doubling rack density.
Microway also offers standard 1U nodes with a density of up to 80 processor cores per rack unit. Up to 3200 cores can be achieved in a single cabinet. Customers requiring increased storage or I/O may choose 2U or 4U nodes with reduced cabinet density.
Finally, NumberSmasher QuadPuter 4P nodes offer an SMP-based Xeon Scalable Processor platform in a 2U form factor. These systems include up to 112 2nd Gen Intel Xeon Scalable Processor “Cascade Lake” cores and up to 12 TB of DDR4 memory, making them ideally suited for large-memory nodes.
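The per-cabinet core counts quoted above follow directly from the sample configurations on this page. A short calculation makes the arithmetic explicit (node counts per cabinet are taken from the sample specifications below; the exact split of rack units between compute, head node, and switching is our assumption):

```python
# Rack-density arithmetic for the NumberSmasher configurations described above.
# Node counts per cabinet come from the sample cluster specifications;
# reserving ~2U of the 42U cabinet for head node/switching is an assumption.

configs = {
    # name: (nodes per cabinet, CPUs per node, max cores per CPU)
    "Twin2 (2U chassis, 4 nodes each)": (80, 2, 40),   # Ice Lake-SP
    "1U dual-socket":                   (40, 2, 40),   # Ice Lake-SP
    "QuadPuter (2U, 4-socket)":         (20, 4, 28),   # Cascade Lake-SP
}

for name, (nodes, cpus_per_node, cores_per_cpu) in configs.items():
    total_cpus = nodes * cpus_per_node
    total_cores = total_cpus * cores_per_cpu
    print(f"{name}: {total_cpus} CPUs, {total_cores} cores per cabinet")
# Twin2: 160 CPUs, 6400 cores — matching the 6400-core figure above
# 1U:     80 CPUs, 3200 cores — matching the 3200-core figure above
# QuadPuter: 80 CPUs, 2240 cores (20 nodes x 112 cores)
```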
Installed Software
Microway NumberSmasher clusters come preinstalled with:
- Linux distribution of your choice, including Red Hat, Ubuntu, SUSE, Debian, Fedora, and Gentoo
- Bright Cluster Manager, OpenHPC, or Microway Cluster Management Software (MCMS™) integrated with optional MPI Link-Checker™ Fabric Validation Suite
- Intel oneAPI Development Tools & Intel Math Kernel Library (MKL)
- Optional user-level applications and libraries
Sample Microway Xeon Cluster Specifications
Lowest Cost & Highest Density
Up to 320 processor cores per 2U NumberSmasher Twin2 Server chassis (160 cores per 1U)
| Sample Cluster Size | 80 Nodes (160 CPUs) in a 42U rackmount cabinet |
|---|---|
| Base Platform | 2U Twin2 Rackmount Server with redundant power supplies; four nodes, each with (2) Intel Xeon Scalable Processors; liquid cooling required for the 40-core SKU in this platform |
| System Memory per Node | Up to 4 TB DDR4 3200 MHz |
| Head Node | Dual Xeon Server (1U–4U) with up to 8 TB memory |
| Storage | Head Node: up to 648 TB; Compute Nodes: up to 54 TB; optional Storage Servers or Parallel HPC Storage System |
| GPUs | Internal GPUs not typically supported in this platform; mix and match nodes from Microway’s GPU-optimized High Performance Computing Clusters |
| Network | Dual Gigabit Ethernet; optional 10/40G, 25G Ethernet |
| Interconnect (optional) | ConnectX-6 200Gb HDR InfiniBand, ConnectX-5 100Gb EDR InfiniBand, or Omni-Path 100Gb Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80 PLUS Platinum-level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Balanced & Flexible
Up to 80 processor cores and 8 TB memory per 1U node
| Sample Cluster Size | 40 Nodes (80 CPUs) in a 42U rackmount cabinet |
|---|---|
| Base Platform | 1U Rackmount Server with optional redundant power supplies; (2) Intel Xeon Scalable Processors; liquid cooling required for the 40-core SKU in this platform |
| System Memory per Node | Up to 8 TB DDR4 3200 MHz |
| Head Node | Dual Xeon Server (1U–4U) with up to 8 TB memory |
| Storage | Head Node: up to 648 TB; Compute Nodes: up to 72 TB; optional Storage Servers or Parallel HPC Storage System |
| GPUs (optional) | NVIDIA A100, A30, or A10 GPU Compute Processors; various node types from Microway’s GPU-optimized HPC Clusters |
| Network | Dual Gigabit Ethernet; optional 10/40G, 25/50/100G Ethernet |
| Interconnect (optional) | ConnectX-6 200Gb HDR InfiniBand, ConnectX-5 100Gb EDR InfiniBand, or Omni-Path 100Gb Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80 PLUS Platinum-level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Most Memory Per Server
Up to 112 processor cores and 12 TB memory per node with NumberSmasher QuadPuter
| Sample Cluster Size | 20 Nodes (80 CPUs) in a 42U rackmount cabinet |
|---|---|
| Base Platform | QuadPuter 2U Rackmount Server; (4) Intel Xeon Scalable Processor “Cascade Lake-SP” CPUs |
| System Memory per Node | Up to 12 TB DDR4 2933 or 2666 MHz |
| Head Node | Dual or Quad Xeon Server (1U–4U) with up to 12 TB memory |
| Storage | Head Node: up to 648 TB; Compute Nodes: up to 54 TB; optional Storage Servers or Parallel HPC Storage System |
| Network | Dual Gigabit Ethernet built-in; optional 10/40G, 25/50/100G Ethernet |
| Interconnect (optional) | ConnectX-6 200Gb HDR InfiniBand, ConnectX-5 100Gb EDR InfiniBand, or Intel Omni-Path 100Gb Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80 PLUS Platinum-level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Call a Microway Sales Engineer for Assistance: 508.746.7341 or
Click Here to Request More Information.
Supported for Life
Our technicians and sales staff consistently ensure that your entire experience with Microway is handled promptly, creatively, and professionally.
Telephone support from Microway’s experienced technicians is available for the lifetime of your cluster. After the initial warranty period, hardware warranties are offered on an annual basis, and out-of-warranty repairs are available on a time-and-materials basis.