Build an AI Factory With Microway

Deploy exceptional infrastructure for AI inference. Microway offers inference solutions spanning single systems (edge or datacenter) up to multi-rack clusters.

Custom Microway

NVIDIA HGX B200 and HGX B300 AI Factory Racks

  • Complete deployment with compute, storage, networking, and AI software, so you reach AI productivity on day 1
  • 32+ NVIDIA B200 or B300 GPUs
  • Single- or multi-rack configurations
  • Delivered by Microway with NVIDIA AI Enterprise
  • Runs NVIDIA NIM microservices, NVIDIA Dynamo and NVIDIA NeMo
  • Comes with onsite installation and runs jobs the day we leave

Scale Out Enterprise AI Factories

  • 256 NVIDIA B200 GPUs or 576 NVIDIA B300 GPUs
  • Recommended architectures from 32 to 1,000 nodes
  • Available with air or liquid cooling
  • Delivered by Microway with NVIDIA Mission Control and NVIDIA AI Enterprise
  • Runs NVIDIA NIM microservices, NVIDIA Dynamo and NVIDIA NeMo
  • Custom-tailored onsite installation and white-glove bringup program

Leadership Scale AI Factory

  • NVIDIA GB200 NVL72, the platform for trillion-parameter AI
  • Built with the NVIDIA Grace Blackwell Superchip Platform
  • Large unified memory configuration, interconnected with NVIDIA NVLink
  • Delivered by Microway with NVIDIA Mission Control and NVIDIA AI Enterprise
  • Runs NVIDIA NIM microservices, NVIDIA Dynamo and NVIDIA NeMo
  • Custom-tailored onsite installation and white-glove bringup program

Powered by the NVIDIA DGX platform

Production AI Factory

  • NVIDIA DGX BasePOD
  • 32 NVIDIA H200 or B200 GPUs
  • Fixed NVIDIA configuration/reference architecture
  • Delivered by Microway with NVIDIA DGX Software Bundle, which includes NVIDIA Base Command and NVIDIA AI Enterprise
  • Runs NVIDIA NIM microservices, NVIDIA Dynamo and NVIDIA NeMo
  • Comes with onsite installation and runs jobs the day we leave

Scale Out Enterprise AI Factories

  • NVIDIA DGX SuperPOD – The World’s First Turnkey AI Data Center
  • 256+ NVIDIA B200 GPUs or 576 NVIDIA B300 GPUs
  • Fixed NVIDIA configurations (multiples of 32- or 72-node Scalable Units)
  • Delivered by Microway with NVIDIA Mission Control and NVIDIA AI Enterprise
  • Runs NVIDIA NIM microservices, NVIDIA Dynamo and NVIDIA NeMo
  • Custom-tailored onsite installation and white-glove bringup program

Leadership Scale AI Factory

  • NVIDIA DGX GB200 NVL72, the platform for trillion-parameter AI
  • Built with the NVIDIA Grace Blackwell Superchip Platform
  • Large unified memory configuration, interconnected with NVIDIA NVLink
  • Delivered by Microway with NVIDIA Mission Control and NVIDIA AI Enterprise
  • Runs NVIDIA NIM microservices, NVIDIA Dynamo and NVIDIA NeMo
  • Custom-tailored onsite installation and white-glove bringup program

Why Select Microway for AI Factories?

Runs Code the Day it Leaves Microway

Solutions integrated by Microway run applications immediately after delivery. That’s how we test them: with real jobs.

Unmatched Burn-In Testing

We burn-in test with applications that stress every GPU memory block.

NVIDIA AI Software Integration

Microway experts can install and integrate NVIDIA AI Enterprise, any NGC container, NVIDIA Dynamo, NVIDIA NeMo™, and NVIDIA NIM™ microservices.

Architected by Experts and Backed by Microway Technical Support

Our sales engineers have extensive expertise in architecting AI solutions, and our technical support team has hands-on experience integrating GPU clusters. Never deal with a Tier 1 OEM “generalist” again.

Datasheet: NVIDIA DGX SuperPOD for AI Factories

Full stack AI infrastructure for today’s enterprise