NVIDIA DGX Deployment for AI Centers of Excellence

NVIDIA DGX SuperPOD infrastructure (compute, networking, and storage) and software enable you to deploy a turnkey AI data center swiftly, smoothly, and confidently, built on the platform that is the gold standard for AI.


DGX Platform

DGX SuperPOD 32 Node Scalable Unit

  • Record-Breaking Scalable Unit: scale-out AI performance with an NVIDIA-validated architecture. This design is the basis of NVIDIA’s world-record-breaking Eos deployment
  • Scales up to Massive Deployments: deploy multiple Scalable Units for an even larger AI Center of Excellence
  • 100 PFLOPS of AI Performance
  • 20 TB of total GPU memory
  • NVIDIA Quantum-2 400 Gb/s NDR InfiniBand, with optional NVLink Fabric
  • A full parallel filesystem from DDN, Weka, VAST, or NetApp, integrated by Microway
  • Managed by NVIDIA Base Command workload management software
  • Performance-optimized NVIDIA NGC containers (see the sketch after this list)
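
As a rough, hypothetical illustration of working with these resources, the short Python sketch below could be run inside an NGC PyTorch container on a single DGX node to list its GPUs and their combined memory (the per-node memory is what a Scalable Unit aggregates into the total quoted above). The container choice and launch method are assumptions, not part of the validated design.

    # Minimal sketch (assumes an NGC PyTorch container with the node's GPUs
    # exposed to it via a GPU-enabled container runtime). Not an official
    # NVIDIA validation tool -- it only reports what this one node can see.
    import torch

    def summarize_node_gpus():
        if not torch.cuda.is_available():
            print("No CUDA devices visible -- check the container's GPU runtime.")
            return

        count = torch.cuda.device_count()
        total_mem_gb = 0.0
        for idx in range(count):
            props = torch.cuda.get_device_properties(idx)
            mem_gb = props.total_memory / 1e9
            total_mem_gb += mem_gb
            print(f"GPU {idx}: {props.name}, {mem_gb:.0f} GB")

        # A Scalable Unit aggregates this per-node GPU memory across all of its nodes.
        print(f"Node total: {count} GPUs, {total_mem_gb:.0f} GB of GPU memory")

    if __name__ == "__main__":
        summarize_node_gpus()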

Why deploy DGX SuperPOD?

Tested and Proven

DGX SuperPOD is a predictable solution that meets the performance and reliability needs of enterprises. NVIDIA tests DGX SuperPOD extensively, pushing it to its limits with enterprise AI workloads.

Simplified Administration With NVIDIA Base Command Manager

Seamlessly automate deployments, software provisioning, ongoing monitoring, and health checks for DGX SuperPOD with NVIDIA Base Command Manager.
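
Base Command Manager performs this monitoring and health checking automatically; the Python sketch below is only a hypothetical, standalone illustration of the kind of per-node GPU health data such tooling gathers, using the NVIDIA Management Library through the pynvml bindings. It is not the Base Command Manager API.

    # Hypothetical standalone sketch of a per-node GPU health snapshot using
    # the NVIDIA Management Library (pynvml). This is NOT the Base Command
    # Manager API -- BCM's own monitoring and health checks cover this for you.
    from pynvml import (
        nvmlInit,
        nvmlShutdown,
        nvmlDeviceGetCount,
        nvmlDeviceGetHandleByIndex,
        nvmlDeviceGetName,
        nvmlDeviceGetTemperature,
        nvmlDeviceGetUtilizationRates,
        NVML_TEMPERATURE_GPU,
    )

    def gpu_health_snapshot():
        """Return basic temperature and utilization readings for every GPU on this node."""
        nvmlInit()
        try:
            readings = []
            for idx in range(nvmlDeviceGetCount()):
                handle = nvmlDeviceGetHandleByIndex(idx)
                util = nvmlDeviceGetUtilizationRates(handle)
                readings.append({
                    "index": idx,
                    "name": nvmlDeviceGetName(handle),
                    "temperature_c": nvmlDeviceGetTemperature(handle, NVML_TEMPERATURE_GPU),
                    "gpu_util_pct": util.gpu,
                    "mem_util_pct": util.memory,
                })
            return readings
        finally:
            nvmlShutdown()

    if __name__ == "__main__":
        for gpu in gpu_health_snapshot():
            print(gpu)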

Dedicated Expertise and Services

DGX SuperPOD includes dedicated expertise spanning installation, infrastructure management, workload scaling, and streamlining production AI.

Software Always Improves

NVIDIA DGX solutions automatically receive performance improvements from NVIDIA. Powered by NVIDIA Base Command, DGX SuperPOD also receives continual updates to its workload orchestration software.

DGX SuperPOD Deployments eBook

Learn about generative AI in practice, with examples of enterprise and other Center of Excellence deployments of DGX SuperPOD.