Event Details
Location: San Jose, CA
Dates: March 16-19, 2026
Booth: 111
NVIDIA GTC is the premier global AI conference, where developers, researchers, and business leaders come together to explore the next wave of AI innovation. From physical AI and AI factories to agentic AI and inference, GTC 2026 will showcase the breakthroughs shaping every industry. GTC advances global awareness of GPU computing, AI, graphics, HPC, embedded systems, and cloud computing. The event features technical sessions, tutorials, panel discussions, meet-the-experts forums, moderated roundtables, and more.
Meet with Microway at GTC
Standard 1:1 Meeting
Schedule a 30-minute 1:1 meeting with Microway to discuss your next project.
Discuss NVIDIA DGX or NVIDIA AI Solutions
Want deeper assistance with NVIDIA DGX or NVIDIA AI solutions? Our Solution Architects can sit down with you for a consultation covering NVIDIA DGX, DGX BasePOD or SuperPOD, NVIDIA HGX, NVIDIA AI Enterprise, and beyond.
Keynote Information
Jensen Huang | CEO | NVIDIA
GTC 2026 Keynote
NVIDIA CEO Jensen Huang will share the latest innovations in AI and GPU computing in his annual keynote. Watch live from 11 a.m. to 1 p.m. PT on March 16 (a replay will also be available afterward). No registration is required for virtual attendees.
Key GTC Sessions
Build Gigascale AI Factories With Next-Generation Rack-Scale Systems [S81793]
Charlie Boyle | VP, DGX Systems | NVIDIA
Aditya Kumarakrishnan | Technical Fellow | Walmart Global Tech
Microway team note: we predict this session may feature details on next-gen NVIDIA DGX systems built on the NVIDIA Vera Rubin architecture. Attend and find out!
The next generation of AI models is driving unprecedented compute consumption, exposing the limitations of traditional system architectures. Enterprises are now evolving data center designs to support new rack-scale architectures for modern inference, as well as large-scale training optimized for these rapidly escalating demands. Join this session to learn how NVIDIA is systemizing the new NVIDIA Vera Rubin architecture to help customers rapidly deploy gigascale AI factories.
CUDA: New Features and Beyond [S81859]
Stephen Jones | CUDA Architect | NVIDIA
The CUDA platform is the foundation of the GPU computing ecosystem. Every application and framework that uses the GPU does so through CUDA’s libraries, compilers, runtimes, and language, which means CUDA is growing as fast as its ecosystem is evolving. In this engineering-focused talk, presented by one of the architects of CUDA, you’ll learn about all that’s new and what’s coming next for both CUDA and GPU computing as a whole.
Microway team note: this is where you’ll learn about new CUDA features coming in 2026 and how CUDA will support new architectures.
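Microway team note: for a taste of the programming model this talk builds on, here’s a minimal sketch of our own (not from the session) that launches a hand-written CUDA kernel from Python using CuPy’s RawKernel. It assumes an NVIDIA GPU and the cupy package are available.

```python
# Minimal illustrative sketch: launch a hand-written CUDA C++ kernel from
# Python via CuPy's RawKernel. Requires an NVIDIA GPU and the cupy package.
import cupy as cp

# A trivial CUDA kernel: element-wise SAXPY (y = a*x + y).
saxpy = cp.RawKernel(r'''
extern "C" __global__
void saxpy(const float a, const float* x, float* y, const int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) {
        y[i] = a * x[i] + y[i];
    }
}
''', 'saxpy')

n = 1 << 20
x = cp.random.rand(n, dtype=cp.float32)
y = cp.random.rand(n, dtype=cp.float32)

threads = 256
blocks = (n + threads - 1) // threads
# Launch as (grid, block, kernel arguments).
saxpy((blocks,), (threads,), (cp.float32(2.0), x, y, cp.int32(n)))

print(y[:4])  # results computed on the GPU
```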
From Cluster to Factory: Scale AI Infrastructure Operations With Software-Defined Intelligence [S81796]
Premal Savla | Sr. Director of Product Management, DGX Systems and Solutions | NVIDIA
Jon Klinginsmith | Executive Director of Cloud Engineering and High Performance Computing | Lilly USA, LLC
Be among the first to hear how software-defined intelligence is helping enterprises scale AI operations. Hear directly from an AI innovator as they reveal key learnings from deploying NVIDIA DGX, sharing firsthand insights into architecting and deploying high-performance AI infrastructure and orchestrating complex inference and training workflows. In this session, we’ll also discuss the critical role of software for NVIDIA Rubin and how to simplify complex liquid-cooled deployments at scale, boost site reliability engineering (SRE) productivity, and enable enhanced AI-powered anomaly detection and remediation across the entire AI factory.
Microway team note: we expect plenty of detail on NVIDIA Mission Control in practice from this session.
Inside NVIDIA DGX AI Factory: Accelerating Networking for AI Across Cloud, Core, and Edge [S81856]
Alexander Petrovskiy | Arts Yang | Andrew Forgue | NVIDIA
AI infrastructure must deliver predictable, high-throughput pipelines for large-scale training and inference while enforcing strict security, data sovereignty, and multi-tenant isolation across cloud and on-premises environments. These demands can overwhelm traditional host-based networking and security, reducing utilization and fragmenting policies. This session will bring together NVIDIA technologists building NVIDIA’s own DGX Cloud platform powered by NVIDIA Networking and show how they are extending the learnings to sovereign DGX AI factory solutions. Understand design patterns such as zero-trust for hardened multi-tenant clusters, accelerated Kubernetes networking, intelligent data platforms, and security services that enable consistent, high-performance AI factories everywhere.
The Genesis of Accelerated Quantum Supercomputing: Unifying AI and Quantum [S81804]
Sam Stanwyck | Group Product Manager | NVIDIA
The path to a fault-tolerant quantum computer (FTQC) is not solely a physics challenge; it is an AI and supercomputing challenge. This talk outlines the singular mission required to deliver scientifically useful accelerated quantum supercomputers by 2028: the complete convergence of AI with quantum hardware, systems, and algorithms. We will also present the blueprint for a collaborative and open quantum ecosystem. By leveraging open standards in software, such as CUDA-Q, and in interconnects, such as NVQLink, we are creating a unified, hybrid infrastructure where national labs, startups, and industry partners can collaborate.
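Microway team note: to see what the CUDA-Q side of that ecosystem looks like in code, here’s a minimal sketch of our own (not from the session) of a two-qubit Bell-state kernel, assuming the cudaq Python package and its default simulator.

```python
# Minimal illustrative sketch, assuming the cudaq (CUDA-Q) Python package:
# a two-qubit Bell-state kernel sampled on the default simulator.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)      # allocate two qubits
    h(qubits[0])                   # put qubit 0 into superposition
    x.ctrl(qubits[0], qubits[1])   # entangle via controlled-X
    mz(qubits)                     # measure in the Z basis

result = cudaq.sample(bell, shots_count=1000)
print(result)  # expect roughly even counts of '00' and '11'
```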
Deep Dive on Gigawatt AI Factories [S81848]
Julie Bernauer | Sr. Director, Applied Systems Engineering | NVIDIA
Adam DeConinck | Director, Applied Systems Engineering | NVIDIA
At-scale AI training and inference workloads now draw on gigawatt-scale multi-GPU systems and the massive data transfer and memory access capabilities of Vera Rubin NVL72 systems in AI factories. We’ll show how these large systems are architected and deployed in the data center, leading to a robust reference architecture embraced globally for its speed of deployment and performance. This session will showcase internal deployments and the tools and software stack developed and leveraged on these platforms. You’ll also get a deep dive into how multi-node NVLink, system interconnects, and storage, as well as the latest liquid-cooled designs, are shaping the future of AI infrastructure.
The Future of AI Storage With NVIDIA: Vera, Rubin, and BlueField-4 [S82263]
Justin Boitano | VP of Enterprise and Edge Computing | NVIDIA
The fuel for AI factories is data, and enterprise data management systems are evolving to meet the large-scale data needs of those factories. Join us to learn how enterprise storage systems are transforming into knowledge and intelligence management systems. We will discuss key technology innovations from NVIDIA that are enabling and accelerating this transition.
Best Practices for Scaling Inference [S81519]
Jalaj Thanaki | Sr. Solutions Architect | NVIDIA
Anish Mukherjee | Solutions Architect | NVIDIA
As organizations scale toward population-level AI adoption, running inference reliably, efficiently, and cost-effectively becomes essential. This session covers best practices for scaling inference across the platform, model, and application layers, using real-world insights and NVIDIA technologies like DGXC-Lepton and NVCF. We’ll address challenges in GPU capacity, autoscaling, model optimization with the NVIDIA software stack, concurrency, architecture, and observability. Learn how to use proven, field-tested strategies for scaling current AI workloads or launching new ones at scale.
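Microway team note: one of the simplest concurrency patterns in this space is capping in-flight requests on the client side so autoscaled GPU capacity isn’t overrun. The sketch below is ours and purely illustrative; the endpoint URL and payload shape are hypothetical placeholders, not an NVIDIA API.

```python
# Minimal illustrative sketch: limit in-flight requests to an inference
# endpoint with an asyncio semaphore. Endpoint and payload are placeholders.
import asyncio
import aiohttp

ENDPOINT = "https://inference.example.com/v1/generate"  # hypothetical
MAX_IN_FLIGHT = 32  # tune to the backend's batch size and GPU count

async def infer(session, sem, prompt):
    async with sem:  # at most MAX_IN_FLIGHT requests outstanding
        async with session.post(ENDPOINT, json={"prompt": prompt}) as resp:
            resp.raise_for_status()
            return await resp.json()

async def main(prompts):
    sem = asyncio.Semaphore(MAX_IN_FLIGHT)
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(infer(session, sem, p) for p in prompts))

if __name__ == "__main__":
    results = asyncio.run(main([f"prompt {i}" for i in range(100)]))
    print(len(results), "responses")
```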
The Era of GPU Data Processing: From SQL to Search and Back Again [S81769]
Todd Mostak | Sr. Director of Engineering | NVIDIA
Joshua Patterson | VP, Solutions Architecture – Accelerating Data Processing | NVIDIA
This session delivers a technical state of the union on GPU-accelerated data processing across SQL/DataFrames, vector search, ML, and decision optimization. Learn how GPU-native engines enable interactive analytics on massive lakehouse-scale datasets and real-time semantic and vector search over billions of embeddings, and make the hardest ML and decision science workloads tractable, cost-efficient, and energy-efficient. The talk highlights the implications for high-impact scientific and enterprise computing, then looks ahead to what’s in flight for 2026 and beyond, outlining concrete architectural patterns and practical guidance for building the next generation of GPU-accelerated data platforms and using them in your day-to-day work.
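Microway team note: for a flavor of the SQL/DataFrame side of this work, here’s a minimal sketch of our own showing a pandas-style group-by aggregation on the GPU. It assumes the RAPIDS cuDF package and uses synthetic data.

```python
# Minimal illustrative sketch, assuming the RAPIDS cuDF package: a
# pandas-style group-by aggregation executed on the GPU over synthetic data.
import cudf
import numpy as np

n = 10_000_000
df = cudf.DataFrame({
    "region": np.random.choice(["east", "west", "north", "south"], n),
    "sales": np.random.rand(n) * 100.0,
})

# The group-by aggregation runs on the GPU; the API mirrors pandas.
summary = df.groupby("region").agg({"sales": ["sum", "mean"]})
print(summary)
```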
Security for the AI-First Enterprise [S81491]
David Reber | CSO | NVIDIA
As AI transforms every layer of modern computing, security must evolve in parallel—shifting from a separate function to an embedded capability woven into every stage of innovation. Today’s CISOs and security leaders are moving beyond reactive defense, redefining their roles as proactive business enablers who accelerate safe adoption of AI, cloud, and emerging technologies. This talk explores how that shift is reshaping security architectures, operating models, and organizational influence. Finally, it highlights why meaningful collaboration across industry, academia, and government is no longer optional. Addressing global cyber threats at AI scale demands shared intelligence, coordinated standards, and partnerships that blend cutting-edge research with real-world deployment. Together, these forces define the future of secure innovation.
Onsite Training Lab: Deploy State-of-the-Art Gen AI on Your RTX PRO Workstation [DLIT81958]
Julius Gregor Tischbein | Sr. Developer Technology Software Engineer | NVIDIA
Maximilian Mueller | Sr. Developer Technology Software Engineer | NVIDIA
This session guides you through the end-to-end deployment of generative image models on NVIDIA RTX PRO workstations by leveraging ONNX Runtime and TensorRT-RTX. We’ll demonstrate how to ship a multi-stage model pipeline across a wide set of hardware without sacrificing performance. This workflow will be powered by classic graphics APIs, ONNX Runtime, and the TensorRT-RTX Execution Provider. You’ll leave with the skills to convert raw models into high-performance, offline-capable applications that run seamlessly across diverse hardware architectures.
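Microway team note: the sketch below is ours (not the lab’s) and shows the general shape of an ONNX Runtime deployment that prefers a TensorRT-backed execution provider with GPU and CPU fallbacks. The model file, input handling, and provider list are assumptions; the TensorRT-RTX execution provider covered in this lab may expose a different provider name.

```python
# Minimal illustrative sketch, assuming the onnxruntime-gpu package: load an
# exported ONNX model and prefer a TensorRT-backed execution provider, with
# CUDA and CPU fallbacks. "model.onnx" is a placeholder.
import numpy as np
import onnxruntime as ort

providers = [
    "TensorrtExecutionProvider",  # fastest path when TensorRT is available
    "CUDAExecutionProvider",      # generic GPU fallback
    "CPUExecutionProvider",       # last resort
]

sess = ort.InferenceSession("model.onnx", providers=providers)

# Feed a dummy input matching the model's first declared input.
inp = sess.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # resolve dynamic dims
dummy = np.random.rand(*shape).astype(np.float32)

outputs = sess.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```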
Leverage AI and Accelerated Computing for Digital Biology: Parabricks, BioNeMo and Clara Open Models [CWES81555]
Connect with the Experts Panel (NVIDIA)
Join NVIDIA experts to explore the field of digital biology. Discuss the latest advancements in accelerated computing and foundation models for drug discovery and genomics — from bioinformatics tooling to protein structure prediction, molecular dynamics to molecular generation, and much more. Learn how NVIDIA BioNeMo, Parabricks, and RAPIDS-singlecell, together with NVIDIA NIM, empower developers, researchers, and enterprises to quickly create generative AI solutions across chemistry, biology, and genomics. Engage directly with our product managers, developer relations, and solution architects to address your challenges and answer your questions.