Performance Characteristics of Common Network Fabrics

<h2>Ethernet</h2>
Performance of Ethernet networks varies widely. Factors include the switch and NIC manufacturer, firmware settings, and system/software settings. Even the physical layer plays a role: 10 Gigabit Ethernet over RJ45 (10GBase-T) has higher latency than SFP+ Direct-Attach copper.

Contact our experts to determine a configuration that meets your requirements.
<table>
<thead>
<tr>
<th>Data Rate</th>
<th>Theoretical Bandwidth (unidirectional)</th>
<th>End-to-End Latency</th>
<th>Technology</th>
</tr>
</thead>
<tbody>
<tr>
<td>Gigabit Ethernet</td>
<td>125 MB/s</td>
<td>25 to 65 microseconds</td>
<td>&nbsp;</td>
</tr>
<tr>
<td>10G Ethernet</td>
<td>1.25 GB/s</td>
<td>1.3 microseconds (RDMA application)<br />
4 microseconds (sockets application)</td>
<td>Mellanox ConnectX-3 VPI</td>
</tr>
<tr>
<td>40G Ethernet</td>
<td>5 GB/s</td>
<td>1.3 microseconds (RDMA application)<br />
4 microseconds (sockets application)</td>
<td>Mellanox ConnectX-3 VPI</td>
</tr>
</tbody>
</table>
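
The "sockets application" latencies above are typically quoted from small-message ping-pong tests between two hosts. Below is a minimal TCP sketch of that kind of measurement (the port number, file name, and command-line arguments are arbitrary placeholders, and error handling is omitted for brevity); one-way latency is taken as half the averaged round-trip time.

<pre><code>/* Minimal TCP ping-pong latency sketch (illustrative only; error handling omitted).
 * The port number and command-line arguments are arbitrary placeholders.
 * One-way latency is reported as half the averaged round-trip time of 1-byte messages. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

#define PORT 18515          /* arbitrary test port */
#define ITERATIONS 10000

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s server | %s <server-ip>\n", argv[0], argv[0]);
        return 1;
    }
    int is_server = (strcmp(argv[1], "server") == 0);
    int sock;
    char byte = 'x';

    if (is_server) {
        /* Passive side: accept one connection and echo every byte back. */
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        int one = 1;
        setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        addr.sin_addr.s_addr = INADDR_ANY;
        bind(listener, (struct sockaddr *)&addr, sizeof(addr));
        listen(listener, 1);
        sock = accept(listener, NULL, NULL);
    } else {
        /* Active side: connect to the echo server and drive the ping-pong. */
        sock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);
        inet_pton(AF_INET, argv[1], &addr.sin_addr);
        connect(sock, (struct sockaddr *)&addr, sizeof(addr));
    }
    int one = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)); /* disable Nagle batching */

    double start = now_sec();
    for (int i = 0; i < ITERATIONS; i++) {
        if (is_server) {            /* echo side: receive, then reply */
            recv(sock, &byte, 1, MSG_WAITALL);
            send(sock, &byte, 1, 0);
        } else {                    /* timing side: send, then wait for the echo */
            send(sock, &byte, 1, 0);
            recv(sock, &byte, 1, MSG_WAITALL);
        }
    }
    double elapsed = now_sec() - start;
    if (!is_server)
        printf("one-way latency: %.2f microseconds\n",
               elapsed / ITERATIONS / 2.0 * 1e6);
    close(sock);
    return 0;
}</code></pre>

Compile with <code>gcc -O2 pingpong.c -o pingpong</code>, start it with the argument <code>server</code> on one node, and run it with the server's IP address on the other. The RDMA figures are measured at the verbs level instead (for example, with the perftest suite's <code>ib_send_lat</code>), which bypasses the sockets stack entirely.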

<hr />

<h2>InfiniBand and Omni-Path Fabrics</h2>
Although these fabrics typically offer the highest throughput and lowest latency, much depends on the configuration of the fabric and on how your software application accesses it. The figures below describe the best possible performance – contact one of our experts to learn more.

MPI bandwidths are measured with large messages using MVAPICH2 MPI. End-to-end latencies are measured with small messages and presume a single switch connecting the two host adapters. Each additional switch hop adds latency (see the hop latency table further below).
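
As a rough illustration of how such figures are gathered, the following is a minimal MPI ping-pong sketch (not the MVAPICH2/OSU benchmark code itself): small messages approximate the end-to-end latency, while large messages approximate the unidirectional bandwidth. The message sizes and iteration count are illustrative choices.

<pre><code>/* Minimal MPI ping-pong sketch (illustrative only; not the OSU/MVAPICH2 benchmark code).
 * Assumes exactly two ranks, one per node. Small messages approximate end-to-end
 * latency; large messages approximate unidirectional bandwidth. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iterations = 1000;
    const size_t sizes[] = { 1, 4 * 1024 * 1024 };   /* 1 B for latency, 4 MB for bandwidth */
    char *buf = calloc(sizes[1], 1);

    for (int s = 0; s < 2; s++) {
        int n = (int)sizes[s];
        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();
        for (int i = 0; i < iterations; i++) {
            if (rank == 0) {           /* rank 0 sends first, then waits for the reply */
                MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {    /* rank 1 echoes every message back */
                MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double one_way = (MPI_Wtime() - start) / iterations / 2.0;  /* seconds per one-way transfer */
        if (rank == 0) {
            if (n == 1)
                printf("latency:   %.2f microseconds\n", one_way * 1e6);
            else
                printf("bandwidth: %.2f GB/s\n", (double)n / one_way / 1e9);
        }
    }
    free(buf);
    MPI_Finalize();
    return 0;
}</code></pre>

Compile with <code>mpicc</code> and launch one rank on each of two nodes (for example, <code>mpirun -np 2 -hostfile hosts ./pingpong</code>, where the host file is a placeholder listing one node per line). Tuned measurements are normally made with the OSU micro-benchmarks developed alongside MVAPICH2.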

<table>
<thead>
<tr>
<th>Data Rate</th>
<th>MPI Bandwidth (unidirectional)</th>
<th>End-to-End Latency</th>
<th>Generation</th>
</tr>
</thead>
<tbody>
<tr>
<td>10Gb/s SDR</td>
<td>1 GB/s</td>
<td>2.6 microseconds</td>
<td>Mellanox InfiniHost III</td>
</tr>
<tr>
<td>20Gb/s DDR</td>
<td>2 GB/s</td>
<td>2.6 microseconds</td>
<td>Mellanox InfiniHost III</td>
</tr>
<tr>
<td>40Gb/s QDR</td>
<td>4 GB/s</td>
<td>1.07 microseconds</td>
<td>Mellanox ConnectX-3</td>
</tr>
<tr>
<td>40Gb/s FDR-10</td>
<td>5.16 GB/s</td>
<td>1.07 microseconds</td>
<td>Mellanox ConnectX-3</td>
</tr>
<tr>
<td>56Gb/s FDR</td>
<td>6.82 GB/s</td>
<td>1.07 microseconds</td>
<td>Mellanox ConnectX-3</td>
</tr>
<tr>
<td>100Gb/s EDR</td>
<td>12.08 GB/s</td>
<td>1.01 microseconds</td>
<td>Mellanox ConnectX-4</td>
</tr>
<tr>
<td>100Gb/s Omni-Path</td>
<td>12.36 GB/s</td>
<td>1.04 microseconds</td>
<td>Intel 100G Omni-Path</td>
</tr>
</tbody>
</table>
<hr />

Larger fabrics require that multiple switches be connected to provide service to all nodes. In such a fabric, each additional switch hop adds a small amount of latency.
<table>
<thead>
<tr>
<th>Data Rate</th>
<th>Hop Latency</th>
<th>Generation</th>
</tr>
</thead>
<tbody>
<tr>
<td>40Gb/s QDR</td>
<td>0.10 microseconds</td>
<td>Mellanox InfiniScale IV</td>
</tr>
<tr>
<td>56Gb/s FDR</td>
<td>0.20 microseconds</td>
<td>Mellanox SwitchX-2</td>
</tr>
<tr>
<td>100Gb/s EDR</td>
<td>0.09 microseconds</td>
<td>Mellanox Switch-IB</td>
</tr>
<tr>
<td>100Gb/s Omni-Path</td>
<td>0.10 microseconds</td>
<td>Intel 100G Omni-Path</td>
</tr>
<tr>
<td>200Gb/s HDR</td>
<td>&lt;0.09 microseconds</td>
<td>Mellanox Quantum</td>
</tr>
</tbody>
</table>
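For example, if traffic between two nodes must cross three EDR switches rather than one, the two additional hops add roughly 2 × 0.09 = 0.18 microseconds, for an estimated end-to-end latency of about 1.01 + 0.18 ≈ 1.19 microseconds.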
<hr />

<em>See also: <a href="https://www.microway.com/knowledge-center-articles/performance-characteristics-of-common-transports-buses/" title="Performance Limits &amp; Bottlenecks of Common Transports/Buses">Performance Characteristics of Common Transports and Buses</a></em>


