<div dir="ltr">We're all really excited, particularly now that dual-10Gbe is starting to show up on low-cost server motherboards. This kinda reminds me of when the 100 MBit to 1 Gbe transition began happening (years ago it seems). I still have a first-run commercial gigabit switch in my pile. It's a huge box with fans and 4 ports on it. Count'm, *four* 1 Gbe ports in a box that needs fans. The transition ramp to 10Gbe seems to be running at around the same pace, with a long high-price commercial ramp leading into a period of huge cost and price reductions as the technology makes its way into the consumer space.<div><br></div><div>Throw in a little NVMe-based SSD storage and a low-cost box today easily has 100x the service capability verses just 10 years ago.<br><div><br></div><div>-Matt</div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Mar 3, 2017 at 9:43 AM, Samuel J. Greear <span dir="ltr"><<a href="mailto:sjg@evilcode.net" target="_blank">sjg@evilcode.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div class="h5"><div class="gmail_extra"><div class="gmail_quote">On Fri, Mar 3, 2017 at 12:44 AM, Sepherosa Ziehau <span dir="ltr"><<a href="mailto:sepherosa@gmail.com" target="_blank">sepherosa@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi all,<br>
>>
>> Since so many folks are interested in the performance comparison, I
>> just did one network-related comparison here:
>> https://leaf.dragonflybsd.org/~sephe/perf_cmp.pdf
>>
>> The intention is _not_ to troll, but to identify gaps and what we can
>> do to keep improving DragonFlyBSD.
>>
>> According to the comparison, we _do_ find one area where DragonFlyBSD's
>> network stack can be improved:
>> Utilize all available CPUs for network protocol processing.
>>
>> Currently we only use a power-of-2 number of CPUs to handle network
>> protocol processing, e.g. on a 24-CPU system, only 16 CPUs will be
>> used. That is fine for workloads involving userland applications,
>> e.g. the HTTP server workload, but it seems forwarding could enjoy
>> all available CPUs. I will work on this.
>>
>> Thanks,
>> sephe
<span class="m_-5827510701722186099gmail-HOEnZb"><font color="#888888"><br>
--<br>
Tomorrow Will Never Die<br>
</font></span></blockquote></div><br></div><div class="gmail_extra"><br></div></div></div><div class="gmail_extra">Sephe,</div><div class="gmail_extra"><br></div><div class="gmail_extra">Great work maximizing throughput while keeping the latency well bounded, this is a pretty astounding performance profile, many thumbs up.</div><div class="gmail_extra"><br></div><div class="gmail_extra"><br></div><div class="gmail_extra">Sam</div></div>
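To make the power-of-2 dispatch Sephe describes above concrete, here is a minimal userland sketch, not the actual DragonFlyBSD kernel code: masking a flow hash with (16 - 1) can only ever select CPUs 0-15 on a 24-CPU machine, while a modulo over the full CPU count reaches all 24. The NCPUS/NCPUS_POW2 constants and the multiplicative hash are illustrative stand-ins for the real CPU topology and the per-flow (Toeplitz/RSS-style) hash the stack or NIC computes.

/*
 * Illustrative sketch only -- not DragonFlyBSD kernel code. It shows why
 * a power-of-2 CPU mask leaves CPUs idle on a 24-CPU machine: hash & 15
 * can only ever select CPUs 0..15, while hash % 24 spreads flows over
 * all 24 CPUs.
 */
#include <stdio.h>
#include <stdint.h>

#define NCPUS		24	/* CPUs present in the example system */
#define NCPUS_POW2	16	/* largest power of 2 <= NCPUS */

int
main(void)
{
	int hits_pow2[NCPUS] = { 0 };
	int hits_all[NCPUS] = { 0 };
	uint32_t flow;

	/* Pretend each value of 'flow' is a distinct flow hash input. */
	for (flow = 0; flow < 24000; flow++) {
		uint32_t hash = flow * 2654435761u;	/* cheap hash mix */

		hits_pow2[hash & (NCPUS_POW2 - 1)]++;	/* power-of-2 dispatch */
		hits_all[hash % NCPUS]++;		/* all-CPU dispatch */
	}

	printf("cpu  pow2-dispatch  all-cpu-dispatch\n");
	for (int cpu = 0; cpu < NCPUS; cpu++)
		printf("%3d  %13d  %16d\n", cpu, hits_pow2[cpu], hits_all[cpu]);
	return 0;
}

Running the sketch prints a per-CPU flow count; the power-of-2 column stays at zero for CPUs 16-23, which is exactly the gap pointed out above. A plain modulo is the simplest way to reach every CPU, though it is costlier than a mask and reshuffles which CPU a given hash lands on; another common approach is a power-of-2 indirection table whose entries are spread across all CPUs, which keeps the cheap mask while using the whole machine.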