Performance Architecture for Next-Generation HPC
Microway NumberSmasher™ Dense Clusters with Xeon Scalable Processors
NumberSmasher clusters integrate the latest 4th Gen Intel Xeon Scalable Processors, “Sapphire Rapids.” They scale from small departmental clusters to large, shared HPC resources. Years of Microway Linux cluster design and integration expertise ensure your cluster arrives with superior performance and functionality compared to competing hardware.
For dense High Performance Computing clusters, we provide up to 9600 processor cores in a standard 42U rackmount cabinet (liquid cooling required). Our NumberSmasher Twin2 servers include four dual-processor compute nodes (totaling 480 cores) in a 2U chassis, effectively doubling rack capacity. Deploy these systems with liquid cooling, or with air cooling at slightly lower core counts.
Microway also offers standard 1U nodes with a density of up to 120 processor cores per rack unit. Up to 4800 cores can be achieved in a single air-cooled cabinet. Customers requiring increased storage or I/O may choose 2U or 4U nodes with reduced cabinet density.
Finally, NumberSmasher QuadPuter 4P nodes offer a four-socket 4th Gen Intel Xeon Scalable Processor platform in a 2U form factor. These systems include up to 240 cores and 16 TB of DDR5 memory, making them ideally suited for large-memory nodes.
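The density figures above follow from simple arithmetic. The sketch below reproduces them from the node and core counts quoted on this page; the assumption that 40U of a 42U cabinet is usable for compute (leaving 2U for networking or a head node) is ours, not a published Microway sizing rule.

```python
def rack_cores(nodes_per_chassis, chassis_units, cpus_per_node,
               cores_per_cpu, usable_u):
    """Total cores in a cabinet filled with one chassis type."""
    chassis_count = usable_u // chassis_units
    cores_per_chassis = nodes_per_chassis * cpus_per_node * cores_per_cpu
    return chassis_count * cores_per_chassis

# Twin2: four dual-socket nodes per 2U chassis, 60-core Xeon SKU
twin2 = rack_cores(nodes_per_chassis=4, chassis_units=2,
                   cpus_per_node=2, cores_per_cpu=60, usable_u=40)

# Standard 1U dual-socket nodes, same 60-core SKU
one_u = rack_cores(nodes_per_chassis=1, chassis_units=1,
                   cpus_per_node=2, cores_per_cpu=60, usable_u=40)

print(twin2)  # 9600 (liquid-cooled Twin2 configuration)
print(one_u)  # 4800 (air-cooled 1U configuration)
```

The same function covers the QuadPuter case: `rack_cores(1, 2, 4, 60, 40)` gives 4800 cores across 20 four-socket nodes.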
Installed Software
Microway NumberSmasher clusters come preinstalled with:
- Linux distribution of your choice, including Red Hat, Rocky Linux, Ubuntu, SuSE, Debian, or Gentoo
- Bright Cluster Manager, OpenHPC, or Microway Cluster Management Software (MCMS™) + Open OnDemand integrated with optional MPI Link-Checker™ Fabric Validation Suite
- Intel oneAPI Development Tools & Intel Math Kernel Library (MKL)
- Optional user-level applications and libraries
Sample Microway Xeon Cluster Specifications
Lowest Cost & Highest Density
Up to 480 processor cores per 2U NumberSmasher Twin2 Server chassis (240 cores per 1U)
| Sample Cluster Size | 80 Nodes (160 CPUs) in a 42U rackmount cabinet |
|---|---|
| Base Platform | 2U Twin2 Rackmount Server with redundant power supplies; four nodes, each with (2) Intel Xeon Scalable Processors; liquid cooling required for the 60-core SKU in this platform |
| System Memory per Node | Up to 4 TB DDR5-4800 |
| Head Node | Dual Xeon Server (1U – 4U) with up to 8 TB memory |
| Storage | Head Node: up to 648 TB; Compute Nodes: up to 90 TB; optional Storage Servers or Parallel HPC Storage System |
| GPUs | Not typically installed internally; mix and match nodes from Microway’s GPU-optimized High Performance Computing Clusters |
| Network | Dual Gigabit or 10G Ethernet; optional 25G/50G/100G Ethernet |
| HPC Interconnect (optional) | ConnectX-7 400Gb NDR or ConnectX-6 200Gb HDR InfiniBand; Cornelis Omni-Path 100Gb Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80PLUS Platinum- or Titanium-Level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Balanced & Flexible
Up to 120 processor cores and 8 TB memory per 1U node
| Sample Cluster Size | 40 Nodes (80 CPUs) in a 42U rackmount cabinet |
|---|---|
| Base Platform | 1U Rackmount Server with optional redundant power supplies; (2) Intel Xeon Scalable Processors |
| System Memory per Node | Up to 8 TB DDR5-4800 |
| Head Node | Dual Xeon Server (1U – 4U) with up to 8 TB memory |
| Storage | Head Node: up to 648 TB; Compute Nodes: up to 120 TB; optional Storage Servers or Parallel HPC Storage System |
| GPUs (optional) | NVIDIA H100, A100, or A30 GPU Compute Processors; various node types from Microway’s GPU-optimized HPC Clusters |
| Network | Dual Gigabit or 10G Ethernet; optional 25G/50G/100G Ethernet |
| HPC Interconnect (optional) | ConnectX-7 400Gb NDR or ConnectX-6 200Gb HDR InfiniBand; Cornelis Omni-Path 100Gb Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80PLUS Platinum- or Titanium-Level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Most Memory Per Server
Up to 240 processor cores and 16 TB memory per node with NumberSmasher QuadPuter
| Sample Cluster Size | 20 Nodes (80 CPUs) in a 42U rackmount cabinet |
|---|---|
| Base Platform | QuadPuter 2U Rackmount Server; (4) Intel Xeon Scalable Processors |
| System Memory per Node | Up to 16 TB DDR5-4800 |
| Head Node | Dual or Quad Xeon Server (1U – 4U) with up to 16 TB memory |
| Storage | Head Node: up to 648 TB; Compute Nodes: up to 90 TB; optional Storage Servers or Parallel HPC Storage System |
| Network | Dual Gigabit or 10G Ethernet; optional 25G/50G/100G Ethernet |
| HPC Interconnect (optional) | ConnectX-7 400Gb NDR or ConnectX-6 200Gb HDR InfiniBand; Cornelis Omni-Path 100Gb Fabric |
| Cabinet | 42U APC NetShelter Cabinet |
| Green HPC Features | High-efficiency (80PLUS Titanium-Level) power supplies; optional low-power Intel CPUs; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Call a Microway Sales Engineer for assistance: 508.746.7341.
Supported for Life
Our technicians and sales staff consistently ensure that your entire experience with Microway is handled promptly, creatively, and professionally.
Telephone support from Microway’s experienced technicians is available for the lifetime of your cluster. After the initial warranty period, hardware warranties are offered on an annual basis, and out-of-warranty repairs are available on a time & materials basis.