DDR4 RDIMM and LRDIMM Performance Comparison

During recent memory testing in our integration lab, our Lead Systems Integrator, Rick Warner, identified when it is appropriate to choose load-reduced DIMMs (LRDIMMs) and when to choose registered DIMMs (RDIMMs) for servers running large amounts of DDR4 memory (256GB and greater). The critical factors to consider are latency, speed, and capacity, and how each relates to your computing objectives.

Misconceptions about Load-Reduced DIMM Performance

Load-reduced DIMMs were built so that the high-speed memory controllers in CPUs could drive larger quantities of memory. Thus, it’s often assumed that LRDIMMs will offer the best performance for memory-dense servers. This impression is strengthened by the fact that Intel’s DDR4 memory population guide shows LRDIMMs running at a higher frequency than RDIMMs (e.g., 2133MHz vs 1866MHz). However, as we’ll show below, other factors carry more weight.

Intel Xeon E5-4600v3 “Haswell” 4-socket CPU Review

Intel has launched its new 4-socket Xeon E5-4600v3 CPUs. They are the perfect choice for scaling “just beyond dual socket”: leverage them for larger memory capacity, faster memory bandwidth, and higher core counts when you aren’t ready for a multi-system purchase.

Here are a few of the main technical improvements:

  • DDR4-2133 memory support, for increased memory bandwidth
  • Up to 18 cores per socket, faster QPI links up to 9.6GT/sec between sockets
  • Up to 48 DIMMs per server, for a maximum of 3TB of memory (48 × 64GB DIMMs)
  • Haswell core microarchitecture with new instructions

Why pick 4-socket Xeon E5-4600v3 CPUs over a 2-socket solution?

Common PCI-Express Myths for GPU Computing Users

At Microway we design a lot of GPU computing systems. One of the strengths of GPU computing is the flexibility of the PCI-Express bus: assuming the server has appropriate power and cooling, it lets us attach GPUs with no special interface modifications, and in many circumstances we can even swap in newer GPUs later. However, we encounter a lot of misinformation about PCI-Express and GPUs. Here are a number of common myths about PCI-E:

1. PCI-Express is controlled through the chipset

No longer true on modern Intel CPU-based platforms. Beginning with the Sandy Bridge CPU architecture in 2012 (Xeon E5 series CPUs, Xeon E3 series CPUs, Core i7-2xxx and newer), Intel integrated the PCI-Express controller into the CPU die itself. Bringing PCI-Express onto the CPU die delivered a substantial latency benefit. This was a major change in platform design, and Intel coupled it with the addition of PCI-Express Gen3 support.
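
One practical upshot of the on-die PCI-Express controller (and the Gen3 support that arrived with it) is that you can verify the link each GPU has actually negotiated. Here is a minimal sketch using the NVML Python bindings; the pynvml package and an installed NVIDIA driver are assumptions about your environment, not requirements of the platforms above:

    # Sketch: query each NVIDIA GPU's current PCI-Express link generation and width.
    # Note that an idle GPU may report a lower generation until placed under load.
    from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                        nvmlDeviceGetHandleByIndex,
                        nvmlDeviceGetCurrPcieLinkGeneration,
                        nvmlDeviceGetCurrPcieLinkWidth)

    nvmlInit()
    for i in range(nvmlDeviceGetCount()):
        handle = nvmlDeviceGetHandleByIndex(i)
        gen = nvmlDeviceGetCurrPcieLinkGeneration(handle)
        width = nvmlDeviceGetCurrPcieLinkWidth(handle)
        print("GPU %d: PCIe Gen%d x%d" % (i, gen, width))
    nvmlShutdown()

On a current platform with the GPU under load, you would expect to see Gen3 x16 for each card.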

Introduction to RAID for HPC Customers

There is a lot of material available on RAID, describing the technologies, the options, and the pitfalls.  However, there isn’t a great deal on RAID from an HPC perspective.  We’d like to provide an introduction to RAID, clear up a few misconceptions, share with you some best practices, and explain what sort of configurations we recommend for different use cases.

What is RAID?

RAID originally stood for Redundant Array of Inexpensive Disks; the acronym is now more commonly taken to mean Redundant Array of Independent Disks.  The main benefits of RAID are improved disk read/write performance, increased redundancy, and the ability to create larger logical volumes.

RAID provides these functions primarily through striping, mirroring, and parity.  Striping breaks files into segments, which are then placed on different drives; because the data is spread across multiple drives operating in parallel, performance improves.  Mirroring duplicates data on the fly across drives.  Parity, in the context of RAID, distributes redundancy information across the drives so that when one or more drives fail (depending on the RAID level), the data can be reconstructed from the remaining drives.
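
To make the parity idea concrete, here is a small sketch of single-parity (XOR) reconstruction, the principle behind RAID 5. It is an illustration only; real RAID implementations work on fixed-size stripes in hardware or in the kernel:

    # Toy illustration of XOR parity: one parity block protects a stripe of data blocks.
    def xor_blocks(blocks):
        """XOR a list of equal-length byte blocks together."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    data_blocks = [b"AAAA", b"BBBB", b"CCCC"]   # data striped across three drives
    parity = xor_blocks(data_blocks)            # parity stored on a fourth drive

    # Simulate losing the second drive: rebuild its block from the survivors plus parity.
    rebuilt = xor_blocks([data_blocks[0], data_blocks[2], parity])
    assert rebuilt == data_blocks[1]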

Introducing the NVIDIA Tesla K80 GPU Accelerator (Kepler GK210)

NVIDIA has once again raised the bar on GPU computing with the release of the new Tesla K80 GPU accelerator.  Offering up to 8.74 TFLOPS of single-precision performance with GPU Boost, the Tesla K80 delivers massive capability and leading density.

NVIDIA Tesla K80

Here are the important performance specifications:

  • Two GK210 chips on a single PCB
  • 4992 total SMX CUDA cores: 2496 on each chip!
  • Total of 24GB GDDR5 memory; aggregate memory bandwidth of 480GB/sec
  • 5.6 TFLOPS single precision, 1.87 TFLOPS double precision (at base clocks)
  • 8.74 TFLOPS single precision, 2.91 TFLOPS double precision with GPU Boost
  • 300W TDP

To achieve this performance, the Tesla K80 is really two GPUs in one. The Tesla K80 block diagram below illustrates how each GK210 GPU has its own dedicated memory and how both communicate with the PCIe bus at x16 speeds through an on-board PCIe switch:

Tesla K80 block diagram
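
Because the two GK210 chips are exposed to the system as separate CUDA devices, a single K80 shows up as two GPUs, each with 12GB of its own memory. As a quick illustration, here is a minimal device-enumeration sketch; it assumes the PyCUDA bindings are installed, which is a convenience of this example rather than anything K80-specific:

    # Sketch: list the CUDA devices visible to the system.
    # A single Tesla K80 should appear as two devices (one per GK210 chip).
    import pycuda.driver as cuda

    cuda.init()
    for i in range(cuda.Device.count()):
        dev = cuda.Device(i)
        mem_gb = dev.total_memory() / (1024 ** 3)
        print("Device %d: %s, %.1f GB, PCI bus ID %s"
              % (i, dev.name(), mem_gb, dev.pci_bus_id()))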

How to Benchmark GROMACS GPU Acceleration on HPC Clusters

Cropped shot of a GROMACS adh simulation (visualized with VMD)

We know that many of our readers are interested in seeing how molecular dynamics applications perform with GPUs, so we are continuing to highlight various packages. This time we will be looking at GROMACS, a well-established and free-to-use (under the GNU GPL) application.  GROMACS is a popular choice for scientists interested in simulating molecular interactions. With NVIDIA Tesla K40 GPUs, it’s common to see 2X to 3X speedups compared to the latest multi-core CPUs.

Benchmark MATLAB GPU Acceleration on NVIDIA Tesla K40 GPUs

MATLAB solving a second order wave equation on Tesla GPUs

MATLAB is a well-known and widely-used application, and for good reason. It functions as a powerful, yet easy-to-use, platform for technical computing. With support for a variety of parallel execution methods, MATLAB also performs well. Support for running MATLAB on GPUs has been built in for a couple of years, with better support in each release. If you haven’t tried yet, take this opportunity to test MATLAB performance on GPUs. Microway’s GPU Test Drive makes the process quick and easy. As we’ll show in this post, you can expect 3X to 6X performance increases for many tasks (with 30X to 60X speedups on select workloads).

Running GPU Benchmarks of HOOMD-blue on a Tesla K40 GPU-Accelerated Cluster

Cropped shot of a HOOMD-blue micellar crystals simulation (visualized with VMD)

This short tutorial explains how to use the GPU-accelerated HOOMD-blue particle simulation toolkit on our GPU-accelerated HPC cluster. Microway allows you to quickly test your codes on the latest high-performance systems – you are free to upload and run your own software, and we also provide a variety of pre-compiled applications with built-in GPU acceleration. Our GPU Test Drive Cluster is a useful resource for benchmarking the performance gains that can be achieved with NVIDIA Tesla GPUs.

This post demonstrates HOOMD-blue, which comes out of the Glotzer group at the University of Michigan. HOOMD-blue supports a wide variety of integrators and potentials, as well as the capability to scale runs up to thousands of GPU compute processors. We’ll demonstrate one server with dual NVIDIA® Tesla® K40 GPUs delivering speedups of over 13X!
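
For a flavor of what a HOOMD-blue job looks like, here is a minimal Lennard-Jones liquid script in the style of the HOOMD-blue 1.x Python API. Treat it as a sketch only; the particle count and parameters are arbitrary, and the API differs between HOOMD-blue releases, so check the documentation for your installed version:

    # Minimal HOOMD-blue 1.x-style sketch: a Lennard-Jones liquid run.
    # HOOMD-blue automatically selects a GPU when one is available.
    from hoomd_script import *

    init.create_random(N=64000, phi_p=0.2)      # random particles at packing fraction 0.2

    lj = pair.lj(r_cut=3.0)                     # Lennard-Jones pair potential
    lj.pair_coeff.set('A', 'A', epsilon=1.0, sigma=1.0)

    integrate.mode_standard(dt=0.005)
    integrate.nvt(group=group.all(), T=1.2, tau=0.5)

    run(10000)                                  # run 10,000 timesteps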

Benchmarking NAMD on a GPU-Accelerated HPC Cluster with NVIDIA Tesla K40

Cropped shot of a NAMD stmv simulation (visualized with VMD)

This is a tutorial on using GPU-accelerated NAMD for molecular dynamics simulations. We make it simple to test your codes on the latest high-performance systems – you are free to use your own applications on our cluster, and we also provide a variety of pre-installed applications with built-in GPU support. Our GPU Test Drive Cluster acts as a useful resource for demonstrating the increased application performance which can be achieved with NVIDIA Tesla GPUs.

This post describes the scalable molecular dynamics software NAMD, which comes out of the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign. NAMD supports a variety of operational modes, including GPU-accelerated runs across large numbers of compute nodes. We’ll demonstrate how a single server with NVIDIA® Tesla® K40 GPUs can deliver speedups of over 4X!

Running AMBER on a GPU Cluster

Cropped shot of an AMBER nucleosome simulation (visualized with VMD)

Welcome to our tutorial on GPU-accelerated AMBER! We make it easy to benchmark your applications and problem sets on the latest hardware. Our GPU Test Drive Cluster provides developers, scientists, academics, and anyone else interested in GPU computing with the opportunity to test their code. While Test Drive users are given free rein to use their own applications on the cluster, Microway also provides a variety of pre-installed GPU-accelerated applications.

In this post, we will look at the molecular dynamics package AMBER. Developed collaboratively by researchers at a number of university labs, the latest versions of AMBER natively support GPU acceleration. We’ll demonstrate how NVIDIA® Tesla® K40 GPUs can deliver a speedup of up to 86X!
