Chassis Photo of the NVIDIA DGX H100



The Gold Standard for AI Infrastructure

DGX H100 is the AI powerhouse that’s accelerated by the groundbreaking performance of the NVIDIA H100 Tensor Core GPU.

The system is designed to maximize AI throughput, providing enterprises with a highly refined, systemized, and scalable platform to help them achieve breakthroughs in natural language processing, recommender systems, data analytics, and much more. DGX H100 delivers the performance needed for enterprises to solve the biggest challenges with AI.



  • Designed to break through barriers in scale: NVIDIA DGX H100 features 6X more performance, 2X faster networking, and high-speed scalability when deployed at AI datacenter scale as part of NVIDIA DGX SuperPOD™
  • Architected for Your AI Center of Excellence: NVIDIA DGX is a fully optimized hardware and software platform that includes support for the new range of NVIDIA AI software solutions, ready to run containers, a rich ecosystem of third-party support, and access to expert advice from NVIDIA professional services
  • Leadership-Class Infrastructure on Your Terms: Available for on-premises deployment via direct purchase, for colocation at DGX-Ready Datacenter Providers after acquisition, for rental via DGX Foundry, or for purchase via financing (as OpEx). All via Microway.
  • 8x NVIDIA H100 GPUs with a total of 640GB HBM3 GPU memory
  • NVIDIA Hopper™ GPU architecture: Transformer Engine for Supercharged AI Performance, 2nd Generation Multi-Instance GPU, Confidential Computing, and new DPX Instructions
  • Arrives with Microway Deployment services: Microway experts will integrate the DGX software stack on your DGX H100 system or NVIDIA DGX SuperPOD™


  • 8 NVIDIA H100 GPUs, each with 80GB of GPU memory
  • Up to 16 PFLOPS of AI Training performance (BFLOAT16 or FP16 Tensor Core Compute)
  • Total of 640GB of HBM3 GPU memory with 3TB/sec of GPU memory bandwidth
  • 4th Generation NVIDIA NVLink® Technology: each NVIDIA H100 GPU now supports 18 NVLink connections for up to 900GB/sec of bandwidth per GPU
  • 4 NVIDIA NVSwitches with 7.2 terabytes per second of bidirectional GPU-to-GPU bandwidth, 1.5X the previous generation
  • Two x86 CPUs and 2TB of system memory
  • Two NVIDIA Bluefield®-3 DPUs for programmable in-network acceleration and datacenter management
  • Eight NVIDIA ConnectX®-7 400Gb/s NDR InfiniBand/Ethernet adapters for fabric communication and scaling via GPUDirect® RDMA, with 1 terabyte per second of peak bidirectional network bandwidth
  • PCIe Gen 5 support, providing 128GB/s of bidirectional bandwidth to all GPUs and fabric adapters
  • 30TB of NVMe storage
  • Redundant, Hot-Swap power supplies
  • Power Consumption: ~10.2kW at full load
  • DGX operating system based on Ubuntu Linux; a Red Hat-based software stack is also available
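As a quick sanity check, the headline bandwidth and capacity figures above follow from per-link and per-lane rates published by NVIDIA (the PCIe per-lane figure is rounded to 4GB/s):

```python
# Cross-check of the bandwidth and capacity figures quoted above.

# NVLink 4: 18 links per H100, 50 GB/s bidirectional per link.
nvlink_per_gpu = 18 * 50           # GB/s
assert nvlink_per_gpu == 900       # matches the 900GB/s-per-GPU figure

# PCIe Gen 5 x16: ~4 GB/s usable per lane per direction (rounded).
pcie_bidir = 16 * 4 * 2            # GB/s, both directions combined
assert pcie_bidir == 128           # matches the 128GB/s figure

# Total HBM3 capacity: 8 GPUs x 80 GB each.
hbm3_total = 8 * 80                # GB
assert hbm3_total == 640           # matches the 640GB figure
```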

DGX H100 Services

Bundled Services

DGX H100 deliveries are bundled with Microway services including:

DGX Site Planning

A Microway Solutions Architect will provide remote consultation to you in planning for the DGX H100’s unique power and cooling requirements. This includes rack diagramming with airflow and power cabling notation as well as answering queries from facilities staff about support requirements of the new DGX H100 hardware.

Deployment Services

All DGX OS and container software is installed, firmware is upgraded to the latest versions, desired DGX containers are installed, and deep learning test jobs are run. Customers may direct questions to our experts throughout the process. In some cases, factory-trained Microway experts may travel to your datacenter.

Optional Services

Microway also offers optional DGX services, including container and/or job execution script creation and partner-provided deep learning data preparation consulting.

Container or Job Execution Script Creation

Creating an effective workflow is key to your success with any hardware resource. The unique container architecture of DGX systems means that proper container management, and often job execution scripts, are a necessity. Microway experts will assist you in creating your default containers, scripts to orchestrate these containers for multiple users in your organization, and methods of dynamically allocating GPUs to containers as required. Experts will also help you plan profiles for the new Multi-Instance GPU (MIG) feature.
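As a purely illustrative sketch of what such an allocation script might involve (the container image, user names, and round-robin policy are assumptions, not Microway's actual deliverable), a helper that maps users' containers onto the system's eight GPUs could look like:

```python
# Hypothetical sketch: round-robin assignment of the DGX H100's 8 GPUs
# (device indices 0-7) to per-user containers. The NGC image name and
# the one-GPU-per-user policy are illustrative assumptions only.
from itertools import cycle

NUM_GPUS = 8  # DGX H100 ships with 8 NVIDIA H100 GPUs

def build_run_commands(users, image="nvcr.io/nvidia/pytorch:24.01-py3"):
    """Return one `docker run` command per user, pinning each container
    to a single GPU via the `--gpus device=` selector."""
    gpu_ids = cycle(range(NUM_GPUS))
    commands = []
    for user in users:
        gpu = next(gpu_ids)  # wraps back to GPU 0 after GPU 7
        commands.append(
            f"docker run --rm --gpus 'device={gpu}' "
            f"--name {user}-train {image}"
        )
    return commands

for cmd in build_run_commands(["alice", "bob"]):
    print(cmd)
```

A production version would typically hand this policy to a scheduler (e.g. Slurm with GPU GRES) rather than static scripts, and MIG-partitioned GPUs would be addressed by MIG device IDs instead of bare indices.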

Deep Learning Data Preparation

An overwhelming majority of the time in a deep learning project is spent on the preparation of data. At your option, Microway’s data-science consulting partners will engage with you to create a custom scope of work, determine the best means to prepare your data for deep learning, create the pre-processing algorithms, assist in pre-processing the training data, and optionally determine effective means of measurement for the overall DL project. Additional services are also available.


Complementary Options

  • Separate direct-attached high-speed flash data plane for smaller data sets
  • DDN Parallel storage solutions for large datasets (up to multi-petabyte), scale-out user-bases, or ultra-high bandwidth requirements

NVIDIA DGX H100 Part Numbers

PN to be released: NVIDIA DGX H100 System for Commercial or Government institutions
PN to be released: NVIDIA DGX H100 System for EDU (Educational institutions)


All NVIDIA DGX Systems are sold and delivered with a Standard DGX Hardware Warranty and 3 years of DGX Support services. These Support services can be renewed annually.

Continuing to renew your Support services ensures you receive the latest DGX software updates (including frameworks) and retain the DGX H100 system’s outstanding SLA commitments.


Contact us for pricing

Final pricing depends upon configuration and any applicable discounts, including education or NVIDIA Inception. Request a custom quotation to receive your applicable discounts.

NVIDIA requires all DGX purchases to include a support services contract. Ensure all quotes you receive include this mandatory DGX support.

Call a Microway Sales Engineer for Assistance: 508.746.7341 or
Click Here to Request a Quotation
