Leadership HPC Performance
Microway Navion™ Clusters with AMD EPYC™ 9004 Series or 7003 Series Processors
Navion clusters deliver leadership HPC performance, with superior floating-point throughput, memory bandwidth, and I/O performance compared to the x86 competition. Users with compute-intensive applications, including CFD and FEA as well as high-performance databases and custom HPC & AI codes, choose Navion clusters for exceptional performance and power/space efficiency.
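Memory bandwidth is often the deciding factor for CFD and FEA workloads. As a rough illustration of how users commonly gauge it on a node, here is a minimal STREAM-triad-style sketch in C with OpenMP; it is not an official benchmark, and the array size and scalar are illustrative assumptions:

```c
/* Minimal STREAM-triad-style bandwidth sketch (illustrative only). */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N      (1L << 26)   /* 64M doubles per array (~512 MB each) */
#define SCALAR 3.0          /* arbitrary triad scalar                */

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

    /* First-touch initialization so pages land on the threads' NUMA nodes */
    #pragma omp parallel for
    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (long i = 0; i < N; i++)      /* triad: a = b + s*c */
        a[i] = b[i] + SCALAR * c[i];
    double t1 = omp_get_wtime();

    /* Two reads plus one write of 8-byte doubles per iteration */
    double gbytes = 3.0 * N * sizeof(double) / 1e9;
    printf("Triad bandwidth: %.1f GB/s\n", gbytes / (t1 - t0));

    free(a); free(b); free(c);
    return 0;
}
```

Built with something like `gcc -O3 -fopenmp triad.c`, a single pass gives only a coarse number; real measurements repeat the kernel and take the best time.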
Installed Software
A Microway Navion preconfigured cluster comes preinstalled with:
- Linux distribution of your choice, including Red Hat, Rocky, Ubuntu, or SUSE
- NVIDIA CUDA Toolkit, Libraries and SDK
- Microway Cluster Management Software (MCMS™) integrated with optional MPI Link-Checker™ Fabric Validation Suite or NVIDIA Bright Cluster Manager
- Optional user-level applications and libraries
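As a quick sanity check of the preinstalled MPI stack and the cluster interconnect, a simple two-rank ping-pong is often run between nodes. The sketch below is a generic illustration, not Microway's MPI Link-Checker Fabric Validation Suite; the 1 MiB message size and iteration count are arbitrary assumptions:

```c
/* Minimal MPI ping-pong sketch between ranks 0 and 1 (illustrative only). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1 MiB message, assumed for illustration */
#define ITERS     100

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    char *buf = calloc(MSG_BYTES, 1);
    if (!buf) MPI_Abort(MPI_COMM_WORLD, 1);

    double start = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        /* Each iteration moves MSG_BYTES out and back */
        double gbytes = 2.0 * ITERS * MSG_BYTES / 1e9;
        printf("Round-trip bandwidth: %.2f GB/s over %d iterations\n",
               gbytes / elapsed, ITERS);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Compiled with `mpicc` and launched across two compute nodes (for example `mpirun -np 2 --host node01,node02 ./pingpong`, with hypothetical hostnames), a healthy HDR or NDR InfiniBand link should report bandwidth far beyond what the built-in Gigabit Ethernet can deliver.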
Sample Microway AMD EPYC Cluster Specifications
High Density with EPYC 9004 Series
Up to 384 processor cores per rack unit with 2U Navion Twin2 compute nodes (liquid cooled): four nodes per 2U chassis, each with two 96-core EPYC 9004-series CPUs
Sample Cluster Size | 80 Nodes (160 CPUs) in a 42U rackmount cabinet |
---|---|
Base Platform | 2U Twin2 Rackmount Server with redundant power supplies; four nodes, each with (2) AMD EPYC 9004-series CPUs; liquid cooling may be required for high-core-count CPU SKUs |
System Memory per Node | Up to 6 TB DDR5 4800 MHz |
Head Node | Dual AMD EPYC Server (1U – 4U) with up to 6 TB memory |
Storage | Head Node: up to 648 TB; Compute Nodes: up to 108 TB; optional Storage Servers or Parallel HPC Storage System |
GPUs | Not typically compatible with dense node types |
Network | Dual Gigabit Ethernet built in; optional 10G/40G/100G Ethernet |
Interconnect (optional) | ConnectX-7 400Gb NDR or ConnectX-6 200Gb HDR InfiniBand Fabric; Cornelis Omni-Path Fabric |
Cabinet | 42U APC NetShelter Cabinet |
Green HPC Features | High-efficiency (80PLUS Titanium-Level) power supplies; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Balanced Performance & Density
Up to 192 processor cores per rack unit with 1U Navion compute nodes
Sample Cluster Size | 40 Nodes (40 CPUs) in a 42U rackmount cabinet |
---|---|
Base Platform | Navion AMD EPYC 1U Server |
System Memory per Node | Up to 6 TB DDR5 4800 MHz |
Head Node | Dual AMD EPYC Server (1U – 4U) with up to 6 TB memory |
Storage | Head Node: up to 648 TB; Compute Nodes: up to 216 TB; optional Storage Servers or Parallel HPC Storage System |
GPUs | Not typically compatible |
Network | Optional 10G/25G/50G/100G Ethernet |
Interconnect (optional) | ConnectX-7 400Gb NDR or ConnectX-6 200Gb HDR InfiniBand Fabric; Cornelis Omni-Path Fabric |
Cabinet | 42U APC NetShelter Cabinet |
Green HPC Features | High-efficiency (80PLUS Platinum-Level) power supplies; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Storage Optimized (3.5")
Up to 96 processor cores per rack unit with 2U Navion compute nodes
Sample Cluster Size | 20 Nodes (40 CPUs) in a 42U rackmount cabinet |
---|---|
Base Platform | Navion AMD EPYC 2U Server |
System Memory per Node | Up to 6 TB DDR5 4800 MHz |
Head Node | Dual AMD EPYC Server (1U – 4U) with up to 6 TB memory |
Storage | Head Node: up to 648 TB; Compute Nodes: up to 216 TB; optional Storage Servers or Parallel HPC Storage System |
GPUs | Up to 2 in specialized configurations |
Network | 10G/25G/50G/100G/200G Ethernet Options |
High-Speed Interconnect (optional) | ConnectX-7 400Gb NDR or ConnectX-6 200Gb HDR InfiniBand Fabric; Cornelis Omni-Path Fabric |
Cabinet | 42U APC NetShelter Cabinet |
Green HPC Features | High-efficiency (80PLUS Platinum-Level) power supplies; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Dense EPYC 7003 Series
Up to 256 processor cores per rack unit with 2U Navion Twin2 compute nodes: four nodes per 2U chassis, each with two 64-core EPYC 7003-series CPUs
Sample Cluster Size | 80 Nodes (160 CPUs) in a 42U rackmount cabinet |
---|---|
Base Platform | 2U Twin2 Rackmount Server with redundant power supplies; four nodes, each with (2) AMD EPYC 7003-series CPUs |
System Memory per Node | Up to 4 TB DDR4 3200 MHz |
Head Node | Dual AMD EPYC Server (1U – 4U) with up to 4 TB memory |
Storage | Head Node: up to 648 TB; Compute Nodes: up to 108 TB; optional Storage Servers or Parallel HPC Storage System |
GPUs | Not typically compatible with dense node types |
Network | Dual Gigabit Ethernet built in; optional 10G/40G/100G Ethernet |
Interconnect (optional) | ConnectX-6 200Gb HDR or ConnectX-5 100Gb EDR InfiniBand Fabric; Cornelis Omni-Path Fabric |
Cabinet | 42U APC NetShelter Cabinet |
Green HPC Features | High-efficiency (80PLUS Platinum-Level) power supplies; software/firmware to reduce power consumption on idle cores; cooling paradigms designed to reduce power consumption |
Call a Microway Sales Engineer for assistance at 508.746.7341, or request more information online.
Supported for Life
Our technicians and sales staff consistently ensure that your entire experience with Microway is handled promptly, creatively, and professionally.
Microway's experienced technicians provide telephone support for the lifetime of your cluster. After the initial warranty period, hardware warranties are offered on an annual basis, and out-of-warranty repairs are available on a time & materials basis.