Category Archives: Hardware

NVIDIA Tesla P100 NVLink 16GB GPU Accelerator (Pascal GP100 SXM2) Up Close

The NVIDIA Tesla P100 NVLink GPUs are a big advancement. For the first time, the GPU is stepping outside the traditional “add-in card” design. No longer tied to the fixed specifications of PCI-Express cards, NVIDIA’s engineers have designed a … Continue reading

NVIDIA Tesla P100 PCI-E 16GB GPU Accelerator (Pascal GP100) Up Close

NVIDIA’s new Tesla P100 PCI-E GPU is a big step up for HPC users, and for GPU users in general. Although other workloads have been leveraging the newer “Maxwell” architecture, HPC applications have been using “Kepler” GPUs for a couple … Continue reading

NVIDIA Tesla P100 Price Analysis

Now that NVIDIA has launched their new Pascal GPUs, the next question is “What is the Tesla P100 Price?” Although it’s still a month or two before shipments of P100 start, the specifications and pricing of Microway’s Tesla P100 GPU-accelerated … Continue reading

Microway joins the OpenPOWER Foundation

We’re excited to announce that Microway has joined the OpenPOWER Foundation as a Silver member. We are integrating the OpenPOWER technologies into our server systems and HPC clusters. We’re also offering our HPC software tools on OpenPOWER. The collaboration between … Continue reading

NVIDIA Tesla M40 24GB GPU Accelerator (Maxwell GM200) Up Close

NVIDIA has announced a new version of their popular Tesla M40 GPU – one with 24GB of high-speed GDDR5 memory. The name hasn’t really changed – the new GPU is named NVIDIA Tesla M40 24GB. If you are curious about … Continue reading

Intel Xeon E5-2600 v4 “Broadwell” Processor Review

Today we begin shipping Intel’s new Xeon E5-2600 v4 processors. They provide more CPU cores, more cache, faster memory access and more efficient operation. These are based upon the Intel microarchitecture code-named “Broadwell” – we expect them to be the … Continue reading

DDR4 Memory on Xeon E5-2600v3 with 3 DIMMs per channel

This week I had the opportunity to run the STREAM memory benchmark on a Microway 2U NumberSmasher server, which supports up to 3 DIMMs per channel. In practice, this system is typically configured with 768GB or 1.5TB of DDR4 memory. … Continue reading
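
Those capacity figures follow directly from the DIMM slot count, assuming a dual-socket configuration with the four memory channels per CPU that Xeon E5-2600 v3 provides, populated with 32GB or 64GB DIMMs (the DIMM sizes are an inference from the totals, not stated in the excerpt):

\[
2~\text{CPUs} \times 4~\text{channels} \times 3~\text{DIMMs/channel} = 24~\text{DIMM slots}
\]
\[
24 \times 32\,\text{GB} = 768\,\text{GB}, \qquad 24 \times 64\,\text{GB} = 1536\,\text{GB} \approx 1.5\,\text{TB}
\]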

NVIDIA Tesla M40 12GB GPU Accelerator (Maxwell GM200) Up Close

With the release of Tesla M40, NVIDIA continues to diversify its professional compute GPU lineup. Designed specifically for Deep Learning applications, the M40 provides 7 TFLOPS of single-precision floating point performance and 12GB of high-speed GDDR5 memory. It works extremely … Continue reading

Keras and Theano Deep Learning Frameworks

Here we will explore how to use the Theano and Keras Python frameworks for designing neural networks in order to accomplish specific classification tasks. In the process, we will see how Keras offers a great amount of leverage and flexibility … Continue reading
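
As a flavor of what the full post walks through, here is a minimal sketch of a binary classifier built with the Keras Sequential API; the layer sizes, synthetic data, and training settings are illustrative assumptions rather than taken from the post, and with Keras configured to use the Theano backend the same code runs unchanged:

```python
# Minimal Keras classification sketch (layer sizes and data are illustrative).
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Synthetic two-class dataset: 1000 samples with 20 features each,
# labeled by whether the feature sum exceeds 10.
X = np.random.rand(1000, 20)
y = (X.sum(axis=1) > 10).astype("float32")

# A small fully connected network for binary classification
model = Sequential()
model.add(Dense(64, activation="relu", input_dim=20))
model.add(Dense(1, activation="sigmoid"))

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Train briefly on the synthetic data
model.fit(X, y, epochs=5, batch_size=32)
```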

DDR4 RDIMM and LRDIMM Performance Comparison

Recently, while carrying out memory testing in our integration lab, our Lead Systems Integrator, Rick Warner, was able to clearly identify when it is appropriate to choose load-reduced DIMMs (LRDIMM) and when to choose registered DIMMs (RDIMM) for … Continue reading