NVIDIA Datacenter GPU Solutions from Microway
NVIDIA GPUs are the leading acceleration platform for HPC and AI. They offer overwhelming speedups over CPU-only platforms across thousands of applications, simple directive-based programming, and the opportunity for custom code. As an NVIDIA NPN Elite partner, Microway can custom-architect a bleeding-edge GPU solution for your application or code.
NVIDIA A100 GPUs (formerly Tesla GPUs)

The latest NVIDIA A100 “Ampere” GPUs provide the utmost in GPU acceleration for your deployment, along with many advanced features:
- Multi Instance GPU (MIG)
Allows each A100 GPU to run seven separate & isolated applications or user sessions (see the NVML sketch following this list)
- Improved compute performance for HPC
Up to 9.7 TFLOPS FP64 double-precision floating-point performance (19.5 TFLOPS via FP64 Tensor Cores)
- Improved performance for Deep Learning
Speedups of 3x~20x for neural network training and 7x~20x for inference (vs Tesla V100), plus new TF32 instructions
- Bigger & Faster GPU memory
40GB (EOL) or 80GB of high-bandwidth memory operating at 1.6TB/s
- Faster connectivity
3rd-generation NVLink provides 10x~20x faster transfers than PCI-Express
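Because MIG partitioning is configured by administrators, software often wants to detect whether it is enabled before launching work. A minimal sketch using NVIDIA's NVML library (assuming a CUDA 11+ toolkit; link with -lnvidia-ml):

```c
// Minimal sketch: query whether MIG mode is enabled on GPU 0 via NVML.
#include <stdio.h>
#include <nvml.h>

int main(void) {
    nvmlDevice_t dev;
    unsigned int current = 0, pending = 0;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    if (nvmlDeviceGetHandleByIndex(0, &dev) != NVML_SUCCESS) return 1;

    // "current" is in effect now; "pending" takes effect after a GPU reset.
    if (nvmlDeviceGetMigMode(dev, &current, &pending) == NVML_SUCCESS) {
        printf("MIG mode: current=%s, pending=%s\n",
               current == NVML_DEVICE_MIG_ENABLE ? "enabled" : "disabled",
               pending == NVML_DEVICE_MIG_ENABLE ? "enabled" : "disabled");
    } else {
        printf("MIG not supported on this GPU\n");
    }
    nvmlShutdown();
    return 0;
}
```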
NVIDIA A30 GPUs
NVIDIA A30 “Ampere” GPUs offer versatile compute acceleration for mainstream enterprise GPU servers:
- Amazing price-performance for HPC compute
Up to 5.2 TFLOPS FP64 double-precision floating-point performance (10.3 TFLOPS via FP64 Tensor Cores)
- Strong AI Training & Inference Performance
Approximately 50% of the FP16 Tensor FLOPS of an NVIDIA A100, plus support for TF32 instructions
- Large and fast GPU memory spaces
24GB of high-bandwidth memory with 933GB/s of memory bandwidth (see the device-query sketch following this list)
- Multi Instance GPU (MIG)
Allows each A30 GPU to run four separate & isolated applications or user sessions
- Fast connectivity
PCI-Express Gen 4.0 interface to the host and a 200GB/sec 3rd Gen NVLink interface to neighboring GPUs
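Capacity and bandwidth figures like these can be verified on your own hardware through the CUDA runtime. A small sketch (device index 0 assumed; theoretical peak bandwidth is derived from the reported memory clock and bus width, so it is an upper bound rather than a measurement):

```c
// Minimal sketch: report a GPU's theoretical peak memory bandwidth.
// Uses only standard CUDA runtime attribute queries; compile with nvcc.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int dev = 0, memClockKHz = 0, busWidthBits = 0;

    cudaSetDevice(dev);
    cudaDeviceGetAttribute(&memClockKHz, cudaDevAttrMemoryClockRate, dev);
    cudaDeviceGetAttribute(&busWidthBits, cudaDevAttrGlobalMemoryBusWidth, dev);

    // Peak bandwidth = clock (Hz) * 2 (double data rate) * bus width (bytes)
    double peakGBs = (double)memClockKHz * 1000.0 * 2.0
                     * (busWidthBits / 8.0) / 1e9;
    printf("GPU %d theoretical peak bandwidth: %.0f GB/s\n", dev, peakGBs);
    return 0;
}
```

On an A30 this calculation should land near the 933GB/s figure quoted above.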

Why NVIDIA Datacenter GPUs?
NVIDIA Datacenter GPUs have unique capabilities not present in consumer GPUs and are the ideal choice for professionals deploying clusters, servers, or workstations. Features unique to NVIDIA’s datacenter GPUs include:
Full NVLink Capability, Up to 600GB/sec
Only NVIDIA Datacenter GPUs deploy the most robust implementation of NVIDIA NVLink for the highest-bandwidth data transfers. At up to 600GB/sec per GPU, your data moves freely throughout the system at nearly 20X the rate of a PCI-E 3.0 x16 link.
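In CUDA applications, NVLink transfers between GPUs are typically exercised through peer-to-peer access. A minimal sketch (two GPUs assumed at indices 0 and 1; the achieved bandwidth depends on the system's NVLink topology):

```c
// Minimal sketch: enable peer-to-peer access between two GPUs and copy a
// buffer directly GPU-to-GPU (over NVLink where the topology provides it).
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int canAccess = 0;
    const size_t bytes = 256 << 20;  // 256 MiB test buffer
    void *src = NULL, *dst = NULL;

    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) { printf("P2P not available between GPU 0 and 1\n"); return 1; }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);   // second argument (flags) must be 0
    cudaMalloc(&src, bytes);

    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);

    // Direct device-to-device copy; no staging through host memory.
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();
    printf("Copied %zu MiB GPU0 -> GPU1 via P2P\n", bytes >> 20);
    return 0;
}
```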
Unique Instructions for AI Training, AI Inference, & HPC
Datacenter GPUs support the latest TF32, BFLOAT16, FP64 Tensor Core, and INT8 instructions that dramatically improve application performance.
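In practice, TF32 is usually reached through the CUDA math libraries rather than hand-written kernels. A hedged sketch of opting a cuBLAS SGEMM into TF32 Tensor Core math (cuBLAS 11+ on an Ampere-class GPU assumed; matrices left uninitialized for brevity):

```c
// Minimal sketch: run an FP32 GEMM with TF32 Tensor Core math via cuBLAS.
// Link with -lcublas.
#include <stdio.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
    const int n = 4096;
    const float alpha = 1.0f, beta = 0.0f;
    float *A, *B, *C;
    cublasHandle_t handle;

    cudaMalloc(&A, (size_t)n * n * sizeof(float));
    cudaMalloc(&B, (size_t)n * n * sizeof(float));
    cudaMalloc(&C, (size_t)n * n * sizeof(float));

    cublasCreate(&handle);
    // Opt this handle into TF32: FP32 inputs/outputs, Tensor Core compute.
    cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);

    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();

    cublasDestroy(handle);
    printf("TF32 SGEMM complete\n");
    return 0;
}
```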
Unmatched Memory Capacity, up to 80GB per GPU
Support your largest datasets with up to 80GB of GPU memory on NVIDIA A100, far greater capacity than available on consumer offerings.
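Applications that size working sets to the GPU can query capacity at runtime. A minimal sketch using the standard CUDA runtime call:

```c
// Minimal sketch: query free and total GPU memory before sizing a dataset.
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    size_t freeB = 0, totalB = 0;
    cudaSetDevice(0);
    cudaMemGetInfo(&freeB, &totalB);  // bytes free / total on the current GPU
    printf("GPU 0: %.1f GB free of %.1f GB total\n", freeB / 1e9, totalB / 1e9);
    return 0;
}
```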
Full GPU Direct Capability
Only datacenter GPUs support the complete array of GPUDirect P2P, RDMA, and Storage features. These critical functions remove unnecessary copies and dramatically improve data flow.
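GPUDirect RDMA is most commonly exercised through a CUDA-aware MPI, which accepts device pointers directly so the interconnect can move data without host staging. A hedged sketch (assumes a CUDA-aware MPI build such as Open MPI with UCX; the RDMA fast path additionally requires the GPUDirect kernel module on the host):

```c
// Minimal sketch: exchange a GPU buffer between two ranks with CUDA-aware MPI.
// With GPUDirect RDMA in place, the NIC reads/writes GPU memory directly.
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    const int count = 1 << 20;  // 1M floats
    int rank;
    float *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    cudaSetDevice(rank % 2);            // naive GPU assignment for a 2-rank job
    cudaMalloc(&buf, count * sizeof(float));

    if (rank == 0)
        MPI_Send(buf, count, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);  // device pointer
    else if (rank == 1)
        MPI_Recv(buf, count, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```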
Explosive Memory Bandwidth up to 1.5TB/s and ECC
NVIDIA Datacenter GPUs uniquely feature HBM2 GPU memory with up to 1.5TB/sec of bandwidth and full ECC protection.
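ECC state and accumulated error counts can be checked programmatically through NVML. A minimal sketch (device 0 assumed; link with -lnvidia-ml):

```c
// Minimal sketch: report ECC mode and aggregate uncorrected error count via NVML.
#include <stdio.h>
#include <nvml.h>

int main(void) {
    nvmlDevice_t dev;
    nvmlEnableState_t current, pending;
    unsigned long long errs = 0;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    nvmlDeviceGetHandleByIndex(0, &dev);

    if (nvmlDeviceGetEccMode(dev, &current, &pending) == NVML_SUCCESS) {
        printf("ECC: current=%s, pending=%s\n",
               current == NVML_FEATURE_ENABLED ? "on" : "off",
               pending == NVML_FEATURE_ENABLED ? "on" : "off");
        // Uncorrected (double-bit) errors accumulated over the board's lifetime.
        nvmlDeviceGetTotalEccErrors(dev, NVML_MEMORY_ERROR_TYPE_UNCORRECTED,
                                    NVML_AGGREGATE_ECC, &errs);
        printf("Aggregate uncorrected ECC errors: %llu\n", errs);
    }
    nvmlShutdown();
    return 0;
}
```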
Superior Monitoring & Management
Full GPU integration with the host system’s monitoring and management capabilities, such as IPMI. Administrators can manage datacenter GPUs with their widely used cluster/grid management tools.
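The same telemetry those tools consume is exposed through NVML. A minimal sketch polling temperature, power, and utilization for device 0:

```c
// Minimal sketch: poll basic health telemetry (temperature, power, utilization).
#include <stdio.h>
#include <nvml.h>

int main(void) {
    nvmlDevice_t dev;
    unsigned int tempC = 0, powerMw = 0;
    nvmlUtilization_t util;

    if (nvmlInit() != NVML_SUCCESS) return 1;
    nvmlDeviceGetHandleByIndex(0, &dev);

    nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &tempC);
    nvmlDeviceGetPowerUsage(dev, &powerMw);          // reported in milliwatts
    nvmlDeviceGetUtilizationRates(dev, &util);       // percent, last sample window

    printf("GPU 0: %u C, %.1f W, %u%% GPU / %u%% memory utilization\n",
           tempC, powerMw / 1000.0, util.gpu, util.memory);
    nvmlShutdown();
    return 0;
}
```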