Dense GPU server with NVLink-connected NVIDIA Tesla V100 GPUs
Note: NVIDIA announced an End of Sale Program for NVIDIA Tesla V100 GPUs in summer 2021. We may still be able to source these GPUs for you with long lead times. However, unless you are expanding an existing Tesla V100 deployment, we recommend selecting an NVIDIA A100- or A30-based system, such as the Navion 2U NVIDIA A100 GPU Server with NVLink or the NumberSmasher 1U 4 GPU Server.
The NumberSmasher 1U Tesla GPU Server with NVLink provides two CPUs and four GPUs within a compact 1U rackmount server platform. This system combines some of our highest compute density with leading GPU:GPU bandwidth.
Four NVIDIA® Tesla® V100 GPUs are each connected via a 300GB/s NVLink interconnect, resulting in data transfers nearly 10X faster than PCI-Express 3.0 x16. This design enables lightning-fast GPU:GPU communication between all GPUs, and packs in over 30 TFLOPS of double-precision performance and 500 Tensor TFLOPS of deep learning performance.
Need more compute power? Microway provides turn-key NumberSmasher clusters including an InfiniBand interconnect and full GPU-Direct RDMA capability.
- Two or Four NVIDIA Tesla V100 GPUs with NVLink
- Additional slots for HDR/EDR InfiniBand, 10G/100G Ethernet, and high-speed storage
- NVIDIA CUDA Toolkit installed and configured – ready to run GPU jobs!
- Up to 56 processor cores
- Up to 3 TB system memory
- (2) Intel Xeon Scalable Processor “Cascade Lake-SP” CPUs (clock speed: up to 3.8 GHz)
- Six-Channel DDR4 2933 MHz ECC/Registered Memory (12 slots)
- Up to (2) Hot-Swap 2.5” 12Gbps drives
- Four SXM2 slots for NVIDIA GPUs, each with 150GB/s connectivity (300GB/s bidirectional)
- Four PCI-Express 3.0 x16 full-height, half-length slots
- Removable Storage: rear USB 3.0 ports
- Two integrated Intel X540 10G Ethernet ports
- IPMI 2.0 with Dedicated LAN Support
- 2000W Redundant High-Efficiency Power Supplies
- ConnectX-6 200Gb HDR InfiniBand, ConnectX-5 100Gb EDR InfiniBand, or 10G/25G/50G/100G Ethernet
- NVIDIA Quadro GPU for visualization
- High-speed NVMe flash storage
- TPM 2.0, with optional TXT support
- PGI Accelerator Compilers (with OpenACC support) for GPUs
- Intel compilers, libraries and tools
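Because each system ships with the NVIDIA CUDA Toolkit installed and configured, you can verify the GPUs are visible immediately after delivery. The following is a minimal sanity-check sketch (compiled with `nvcc`; it assumes at least one CUDA-capable GPU is present):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("Found %d CUDA device(s)\n", count);

    // Report each GPU's name, memory capacity, and compute capability
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GB memory, compute capability %d.%d\n",
               i, prop.name, prop.totalGlobalMem / 1e9,
               prop.major, prop.minor);
    }
    return 0;
}
```

On a fully populated NumberSmasher 1U with four Tesla V100 GPUs, this would list four devices of compute capability 7.0.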
Supported for Life
Our technicians and sales staff consistently ensure that your entire experience with Microway is handled promptly, creatively, and professionally.
Telephone support is available for the lifetime of your server(s) by Microway’s experienced technicians. After the initial warranty period, hardware warranties are offered on an annual basis. Out-of-warranty repairs are available on a time & materials basis.
System Price: $24,000 to $75,000
Each Microway system is customized to your requirements. Final pricing depends upon configuration and any applicable educational or government discounts.
Call a Microway Sales Engineer for Assistance: 508.746.7341, or Click Here to Request More Information.
Benefits of our GPU Server with NVLink
For applications which support execution across multiple GPUs, it is critical that communications between GPUs be efficient. NVIDIA’s NVLink interconnect provides unprecedented performance compared to previous-generation PCI-Express connectivity. As shown in the diagram below, there is full NVLink connectivity between every GPU in the server. Connections are also provided for full-speed GPU-Direct RDMA transfers as part of a GPU-accelerated HPC cluster.
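In CUDA, applications take advantage of this connectivity by enabling peer-to-peer access between GPUs, after which memory copies travel directly over NVLink without staging through host memory. The following is a minimal sketch (it assumes a system with at least two peer-capable GPUs, such as this server's four NVLink-connected V100s):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // Check whether GPU 0 can directly address GPU 1's memory
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) {
        printf("Peer access between GPU 0 and GPU 1 is not available\n");
        return 1;
    }

    const size_t bytes = 256 << 20;  // 256 MB test buffer
    float *src = nullptr, *dst = nullptr;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   // let GPU 0 access GPU 1
    cudaMalloc(&src, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);   // let GPU 1 access GPU 0
    cudaMalloc(&dst, bytes);

    // Direct GPU-to-GPU copy; over NVLink this bypasses the host entirely
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();
    printf("Copied %zu MB from GPU 0 to GPU 1 peer-to-peer\n", bytes >> 20);

    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```

Frameworks such as NCCL and MPI-based codes use the same peer-to-peer machinery under the hood, which is why NVLink bandwidth translates directly into faster multi-GPU scaling.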