8-GPU Server with NVIDIA A100 NVLink GPUs – HGX A100™
Microway’s Octoputer™ 8-GPU server with NVIDIA NVLink® Technology provides eight NVIDIA A100 Tensor Core GPUs and two 3rd Gen Intel Xeon Scalable Processors within a powerful 4U rackmount server platform. This system is based on the NVIDIA HGX A100 8-GPU platform.
Leveraging NVIDIA’s Unified Memory technology and the NVIDIA NVLink GPU interconnect, this system allows your applications to utilize the memory of all eight GPUs combined in a single memory space. The architecture is uniquely suited to AI training and highly GPU-accelerated HPC applications.
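As a rough illustration of what the single memory space looks like in practice, the sketch below (assumptions: CUDA toolkit installed and peer-capable GPUs, as on this platform) allocates one Unified Memory buffer that every GPU can access directly, with peer access enabled over the NVLink/NVSwitch fabric:

```cuda
// Hedged sketch, not a benchmark: one managed allocation shared by all GPUs.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int ngpus = 0;
    cudaGetDeviceCount(&ngpus);

    // Enable peer access between every pair of GPUs (NVLink/NVSwitch path).
    for (int i = 0; i < ngpus; ++i) {
        cudaSetDevice(i);
        for (int j = 0; j < ngpus; ++j) {
            if (i == j) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, i, j);
            if (ok) cudaDeviceEnablePeerAccess(j, 0);
        }
    }

    // A single managed allocation visible to all GPUs at one address.
    float *data = nullptr;
    size_t bytes = size_t(1) << 30;   // 1 GiB
    cudaMallocManaged(&data, bytes);

    // Kernels launched on any of the GPUs may now read and write `data`
    // directly; the driver migrates or maps pages over NVLink as needed.
    printf("Managed buffer shared across %d GPUs\n", ngpus);

    cudaFree(data);
    return 0;
}
```

Because every GPU sees the same pointer, multi-GPU codes can partition work without explicit staging copies through host memory.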
Each Octoputer server arrives fully integrated with the operating system of your choice, along with the NVIDIA CUDA toolkit. Your choice of libraries, tools, and software applications is available for an additional fee. All systems pass rigorous stress tests for over 48 hours prior to shipment.
Need scalable compute? Microway provides turn-key GPU clusters, including InfiniBand interconnects and GPU-Direct RDMA capability.
- 8 NVIDIA A100 GPUs with: 40GB of HBM2 or 80GB HBM2e memory, 3rd Gen NVIDIA NVLink Technology, and next generation Tensor Cores supporting TF32 instructions
- 6 NVIDIA NVSwitches for maximum GPU-GPU Bandwidth
- Full all-to-all communication with 600GB/sec of bandwidth per GPU
- Supports GPUDirect® RDMA over PCI-E x16 4.0 to Mellanox 200Gb HDR InfiniBand adapters
- Up to 80 cores from two 3rd Gen Intel Xeon Scalable Processors
- Up to 8 TB system memory
- Additional slots for HDR InfiniBand, 10G/100G Ethernet, and high-speed storage
- NVIDIA NGC-ready for easy containerized applications
- NVIDIA CUDA Toolkit installed and configured – ready to run GPU jobs!
- (2) 3rd Gen Intel Xeon Scalable Processor “Ice Lake-SP” CPUs (clock speed: up to 3.6 GHz)
- Eight NVIDIA A100 GPUs via SXM4 GPU modules, each with 300GB/s connectivity (600GB/s bidirectional)
- Eight-Channel DDR4 3200MHz ECC/Registered Memory (32 slots)
- Eight PCI-Express x16 4.0 low-profile, half-length slots for GPU-Direct RDMA enabled InfiniBand
- Up to (6) Hot-Swap 2.5” NVMe drives
- Two PCI-Express x16 4.0 low-profile, half-length slots for storage or additional fabric
- Removable Storage: rear USB 3.0 ports
- AIOM Slot for up to 2 optional 10 Gigabit Ethernet ports
- IPMI 2.0 with Dedicated LAN Support
- 6000W High-Efficiency Power with 2+2 Redundancy
- ConnectX-6 200Gb HDR or ConnectX-5 100G EDR InfiniBand, or 10G/25G/50G/100G Ethernet
- TPM 2.0, with optional TXT support
- NVIDIA HPC SDK (with OpenACC support) for GPUs
- Intel compilers, libraries and tools
Supported for Life
Our technicians and sales staff consistently ensure that your entire experience with Microway is handled promptly, creatively, and professionally.
Telephone support is available for the lifetime of your server(s) by Microway’s experienced technicians. After the initial warranty period, hardware warranties are offered on an annual basis. Out-of-warranty repairs are available on a time & materials basis.
System Price: $118,567 to $211,197
Each Microway system is customized to your requirements. Final pricing depends upon configuration and any applicable educational or government discounts.
Call a Microway Sales Engineer for assistance: 508.746.7341, or request more information online.
Benefits of our 8-GPU Server with NVLink
As applications scale across multiple GPUs, efficient GPU-to-GPU communication becomes critical. NVIDIA’s NVLink interconnect provides far higher bandwidth than PCI-Express connectivity. As shown in the diagram below, all eight GPUs are connected through NVIDIA NVSwitches, providing 600GB/sec of bandwidth per GPU for all-to-all communication. Connections are also provided for full-speed GPU-Direct RDMA transfers as part of a GPU-accelerated HPC cluster.
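To make the GPU-to-GPU path concrete, the following sketch (assumptions: two peer-capable GPUs; device IDs 0 and 1 are illustrative) performs a direct device-to-device copy. Once peer access is enabled, the transfer is routed over the NVLink/NVSwitch fabric rather than bouncing through host memory:

```cuda
// Hedged sketch of a peer-to-peer copy between two GPUs over NVLink.
#include <cuda_runtime.h>

int main() {
    size_t bytes = size_t(256) << 20;   // 256 MiB
    float *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    cudaDeviceEnablePeerAccess(1, 0);   // allow GPU0 <-> GPU1 traffic

    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);
    cudaDeviceEnablePeerAccess(0, 0);

    // Direct device-to-device transfer; on an NVSwitch platform like this
    // one it travels over the switch fabric at NVLink speeds.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);

    cudaSetDevice(1);
    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}
```

The same peer-access mechanism underlies GPU-Direct RDMA, where the InfiniBand adapter reads and writes GPU memory directly across the cluster.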