IBM Power Systems server S822LC for HPC

OpenPOWER GPU Server with NVIDIA Tesla P100 NVLink GPUs

OpenPOWER server with CPU-to-GPU and GPU-to-GPU NVLink connectivity

Microway’s OpenPOWER GPU server with NVLink provides two IBM POWER8 CPUs and four NVIDIA Tesla P100 GPUs within a 2U rackmount server platform. These systems provide an intelligently-balanced platform for next-generation high performance computing clusters. These systems are built upon the IBM Power Systems Server S822LC for HPC.

With the introduction of NVIDIA’s new “Pascal” Unified Memory capabilities, GPU-accelerated applications are able to access all system memory and the memory of all GPUs combined. With four Tesla P100 GPUs, a total of 64GB of high-bandwidth GPU memory is available (in addition to up to 1TB of system memory). NVLink connections (between each pair of GPUs and between the CPUs and the GPUs) allow for data transfers up to five times faster than PCI-Express. This design enables new applications which leverage large quantities of memory.
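
As a rough illustration of what this Unified Memory design enables (a minimal sketch, not Microway-supplied software; the buffer size and kernel are hypothetical), the CUDA program below allocates a 24GB managed buffer, larger than a single P100's 16GB of HBM2, and touches it from both the CPU and a GPU, relying on Pascal's on-demand page migration:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Scale each element of the buffer in place.
    __global__ void scale(double *data, size_t n, double factor)
    {
        size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main()
    {
        // 3 Gi doubles = 24GB, more than one P100's 16GB of HBM2.
        // Pascal Unified Memory migrates pages between system memory and
        // GPU memory on demand, so this oversubscription is permitted.
        const size_t n = (size_t)3 << 30;
        double *data = NULL;
        cudaMallocManaged(&data, n * sizeof(double));

        for (size_t i = 0; i < n; ++i)          // first touched on the CPU
            data[i] = 1.0;

        const int threads = 256;
        const size_t blocks = (n + threads - 1) / threads;
        scale<<<(unsigned)blocks, threads>>>(data, n, 2.0);   // pages migrate to the GPU
        cudaDeviceSynchronize();

        printf("data[0] = %f\n", data[0]);      // pages migrate back on CPU access
        cudaFree(data);
        return 0;
    }

Compiled with the bundled CUDA 8.0 toolkit (for example, nvcc -arch=sm_60), this sketch runs on Pascal hardware; on earlier GPU generations a managed allocation larger than device memory would simply fail.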

Each OpenPOWER GPU server arrives fully integrated with the operating system of your choice (RHEL, CentOS, or Ubuntu Linux), along with the NVIDIA CUDA 8.0 toolkit. Your choice of libraries, tools, and software applications is available for an additional fee. All systems pass rigorous stress tests prior to shipment.

Need more compute power? Microway provides turn-key OpenPOWER clusters including an InfiniBand interconnect, full GPU-Direct RDMA capability, and high-performance clustered storage.


  • Two or four top-performance NVIDIA Tesla P100 GPUs with NVLink
  • NVLink connectivity for data-intensive and multi-GPU applications
  • Support for high-speed InfiniBand fabrics and Ethernet connectivity
  • NVIDIA CUDA Toolkit installed and configured – ready to run GPU jobs! (see the device-query sketch after these specifications)
  • Up to 20 IBM POWER processor cores (each supporting 8 threads)
  • Up to 1TB system memory with a total throughput of 230GB/s
  • (2) IBM POWER8 with NVLink CPUs
    (with a total of 16 cores at 3.259 GHz or 20 cores at 2.860 GHz)
  • Up to 1TB of high-performance DDR4 ECC/Registered Memory (32 slots)
  • Up to (2) Hot-Swap 2.5” 12Gbps drives
  • Four SXM2 slots for NVIDIA Tesla P100 GPUs,
    each with NVLink 80GB/s connectivity (160GB/s bidirectional)
  • Two PCI-Express 3.0 x16 low-profile slots with CAPI support
  • One PCI-Express 3.0 x8 low-profile slot with CAPI support
  • Removable Storage: one front and one rear USB 3.0 port
  • Options for Gigabit Ethernet, 10GbE, and 40GbE ports
  • IPMI 2.0 with Dedicated LAN Support
  • Dual, Redundant 1300W Power Supplies
  • 100G ConnectX-5 / ConnectX-4 InfiniBand
  • High-speed NVMe flash storage
  • PGI Accelerator Compilers (with OpenACC support) for OpenPOWER
  • IBM XL compilers and tools
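
As a quick way to confirm that the CUDA toolkit and all four GPUs are visible (a minimal, hypothetical sketch rather than anything shipped with the system), a short CUDA runtime program can enumerate the devices and report their properties:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int ngpus = 0;
        cudaGetDeviceCount(&ngpus);
        printf("Detected %d CUDA device(s)\n", ngpus);   // expect 4 Tesla P100s

        for (int i = 0; i < ngpus; ++i) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            printf("GPU %d: %s, compute capability %d.%d, %.1f GB memory\n",
                   i, prop.name, prop.major, prop.minor,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
        }
        return 0;
    }

Compiled with the installed toolkit (for example, nvcc query.cu -o query), each Tesla P100 should report compute capability 6.0 and roughly 16GB of memory.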

Supported for Life

Our technicians and sales staff consistently ensure that your entire experience with Microway is handled promptly, creatively, and professionally.

Telephone support is available for the lifetime of your server(s) by Microway’s experienced technicians. After the initial warranty period, hardware warranties are offered on an annual basis. Out-of-warranty repairs are available on a time & materials basis.

System Price: $35,000 to $75,000

Each Microway system is customized to your requirements. Final pricing depends upon configuration and any applicable educational or government discounts.

Call a Microway Sales Engineer for assistance: 508.746.7341, or
request more information online.

Benefits of our OpenPOWER GPU Server with NVLink

A powerful compute accelerator must have fast access to memory. These OpenPOWER systems provide NVLink connections between the CPUs and the GPUs, delivering CPU-to-GPU communication speeds well beyond what previous-generation PCI-Express connectivity allows. These links also speed data transfer between system memory and GPU memory, which lets GPU applications operate directly on system memory (in addition to leveraging the high-bandwidth GPU memory). For applications which support execution across multiple GPUs, NVLink provides rapid GPU-to-GPU communication. Connections are also provided for full-speed GPU-Direct RDMA transfers as part of a GPU-accelerated HPC cluster.
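
To make the multi-GPU point concrete, here is a minimal sketch (illustrative only, with a hypothetical buffer size) that uses the standard CUDA peer-to-peer API to enable direct GPU-to-GPU access and then performs a device-to-device copy, which travels over NVLink instead of staging through system memory:

    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int ngpus = 0;
        cudaGetDeviceCount(&ngpus);

        // Check and enable peer access between every pair of GPUs.
        for (int src = 0; src < ngpus; ++src) {
            cudaSetDevice(src);
            for (int dst = 0; dst < ngpus; ++dst) {
                if (src == dst) continue;
                int canAccess = 0;
                cudaDeviceCanAccessPeer(&canAccess, src, dst);
                printf("GPU %d -> GPU %d peer access: %s\n",
                       src, dst, canAccess ? "yes" : "no");
                if (canAccess)
                    cudaDeviceEnablePeerAccess(dst, 0);
            }
        }

        // With peer access enabled, a 256MB device-to-device copy moves
        // directly between the two GPUs' memories.
        if (ngpus >= 2) {
            const size_t bytes = (size_t)256 << 20;
            void *buf0 = NULL, *buf1 = NULL;
            cudaSetDevice(0); cudaMalloc(&buf0, bytes);
            cudaSetDevice(1); cudaMalloc(&buf1, bytes);
            cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
            cudaDeviceSynchronize();
            cudaFree(buf1);
            cudaSetDevice(0); cudaFree(buf0);
        }
        return 0;
    }

For node-to-node transfers, GPU-Direct RDMA is typically exercised through a CUDA-aware MPI library over the InfiniBand fabric rather than in application code.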

Block diagram of the 2U Microway OpenPOWER GPU server with Tesla P100 NVLink GPUs

Photo of the IBM Power Systems S822LC for HPC with NVLink

About Eliot Eshelman

My interests span from astrophysics to bacteriophages; high-performance computers to small spherical magnets. I've been an avid Linux geek (with a focus on HPC) for more than a decade. I work as Microway's Vice President of Strategic Accounts and HPC Initiatives.