Event Details

Location: San Jose, CA

Dates: March 17–21, 2025

Booth: 2405

The NVIDIA GPU Technology Conference (GTC) advances global awareness of GPU computing, computer graphics, game development, mobile computing, and cloud computing. Through world-class education, including hundreds of hours of technical sessions, tutorials, panel discussions, and moderated roundtables, GTC brings together thought leaders from a wide range of fields.


Meet with Microway at GTC




Keynote Information


GTC 2025 Keynote

NVIDIA CEO Jensen Huang will share the latest innovations in AI and GPU computing in his annual keynote. Watch live at 10 a.m. PT on March 18; a replay will be available afterward. No registration is required for virtual attendees.

Key GTC Sessions


Next-Generation at-Scale Compute in the (DGX) Data Center [S73623]

We’ll highlight our latest designs for DGX clusters, including both compute and data center configurations showcasing an example with DGX SuperPOD. We’ll discuss the architectural and design principles underlying next-gen air- and liquid-cooled deployments. We’ll also examine the network fabric, interconnect solutions, and storage options that enable maximum performance, and how these components integrate with the software stack for generative AI applications.


The Next Frontier of AI Supercomputing: Efficiency With Unprecedented Capability [S72599]

Often the bearer of new announcements, NVIDIA’s Ian Buck will showcase cutting-edge innovations that are revolutionizing high-performance computing while prioritizing energy conservation. Discover how these technologies are enabling industries and nations to achieve new insights. Join us to explore how supercomputing is not only advancing capabilities, but also paving the way for a greener, more efficient future in tech.


Build a Strategic Foundation for Enterprise Gen AI [S72357]

Often the bearer of new announcements, NVIDIA’s Charlie Boyle will lead this talk and panel. We’ll delve into the critical elements required to operationalize generative AI at scale, including proven infrastructure with a comprehensive software stack that delivers high reliability and uptime, clear visibility into cluster and job status, and simplified workload management to maximize developer productivity.


Onsite Only – Everything DGX! [CWE72415]

Wondering how to incorporate new technology into your current DGX SuperPOD? Trying to understand exactly how usage and management of a B200 DGX SuperPOD compares to one with DGX GB200? Trying to understand something that seems simple, but somehow isn’t, on an NVIDIA DGX? This is the session for you! Connect with solution architects, product managers, and other experts in an open discussion to help get your questions answered by the people who know.


Onsite Only – Deploy and Leverage NVIDIA Grace Blackwell and Grace Hopper in Your Data Center [CWE74386]

Learn about the unique architecture of these superchips and their versatile form factors that simplify data center deployment. We’ll address questions such as how the 900 GB/s NVLink Chip-to-Chip (C2C) interconnect enables the CPU and GPU to access all system-allocated memory, how you can seamlessly migrate your x86-based workloads to the Arm-based Grace CPU, and what performance gains you can expect to unlock for your applications.


CUDA: New Features and Beyond [S72383]

The CUDA platform is the foundation of the GPU computing ecosystem. At this engineering-focused talk by one of the architects of CUDA, you’ll learn what’s new and what’s coming next for both CUDA and GPU computing as a whole.


LLM Inference Performance and Optimization on NVIDIA GB200 NVL72 [S72503]

In this session, we will dive into the GB200 NVL72 architecture and programming model, highlighting its inference performance on state-of-the-art LLMs. We will also explore optimization techniques that enable the 72 Blackwell GPUs to work together over NVIDIA NVLink, functioning as one giant GPU.


Wired for AI: Lessons from Networking 100K+ GPU AI Data Centers and Clouds [S71145]

In this session, leading AI cloud data center operators will come together to share their experiences and insights from building and deploying colossal systems. They’ll delve into the unique challenges of networking at such a massive scale, and how they overcame them. Attendees will gain a deep understanding of the lessons learned in scaling infrastructure to support the next generation of AI, from the complexities of connecting thousands of GPUs to the innovations required to maintain performance and reliability at such an unprecedented scale.


Frontiers of AI and Computing: A Conversation With Yann LeCun and Bill Dally [S73208]

This talk brings together Yann LeCun, a pioneer in deep learning and the chief AI scientist at Meta, and Bill Dally, a leading computer architect and chief scientist at NVIDIA, to explore the future of AI models, hardware accelerators, and the evolving computational landscape. The discussion will cover:

1. The next breakthroughs in deep learning and AI architectures
2. How hardware innovation drives AI efficiency and scalability
3. Challenges in training large-scale models and real-time AI inference


Harnessing AI Agents for Enterprise Success: Insights From AI Experts [S72355]

In this session, moderated by NVIDIA’s vice president of enterprise AI and automation, AI experts will share their insights and experiences in developing and deploying generative AI and agentic solutions. They’ll discuss the opportunities AI agents bring, real-world enterprise use cases, strategies for building user trust, and the challenges of scaling these technologies across organizations.

Microway AI & GPU Solutions

NVIDIA DGX H200

The Gold Standard for AI Infrastructure


NVIDIA DGX B200

The Foundation of Your AI Center of Excellence

NVIDIA DGX BasePOD™

Rack-scale AI with multiple DGX systems & parallel storage

NVIDIA DGX SuperPOD™

The World’s First Turnkey AI Data Center

All GenAI Systems

Explore Microway’s Generative AI Systems and Clusters

NVIDIA GPU Servers

GPU Servers with up to 10 NVIDIA data center GPUs

GPU Clusters

Custom AI & HPC clusters from 5 to 500 GPU nodes

You May Also Like

  • Events

    Building Intelligent Chatbots Using RAG

    On Demand Webinar To stay competitive in today’s rapidly evolving tech landscape, integrating AI into business operations isn’t a choice anymore – it’s a necessity. Large language models (LLMs) have gained immense popularity, but they face challenges: They’re prone to hallucinations and have trouble understanding domain-specific topics. Retrieval-augmented generation (RAG) is a groundbreaking leap in…
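The retrieve-then-generate loop the webinar describes can be sketched in a few lines. This is a toy illustration, not the webinar's implementation: word-overlap scoring stands in for a real embedding model, and the prompt-building step stands in for an actual LLM call.

```python
# Minimal RAG sketch (illustrative only).
# Assumptions: word-overlap scoring replaces an embedding model,
# and build_prompt replaces a real LLM call.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the prompt in retrieved context to curb hallucination."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "DGX SuperPOD is a turnkey AI data center solution.",
    "RAG retrieves domain documents before generation.",
    "Grace CPUs are Arm-based.",
]
prompt = build_prompt("What does RAG retrieve?", corpus)
```

Because the generator sees only retrieved, domain-specific text, its answers are anchored to your documents rather than to whatever the base model half-remembers; a production system would swap in vector embeddings and a hosted LLM.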