10G network netperf performance (ix)
sepherosa at gmail.com
Fri Mar 14 05:11:07 PDT 2014
On Fri, Mar 14, 2014 at 8:02 PM, Sepherosa Ziehau <sepherosa at gmail.com> wrote:
>> Thanks, could you post TCP_RR data ?
> I am not sure whether TCP_RR is really useful, since each process is
> working on one socket. However, I have some statistics for
> tools/tools/netrate/accept_connect/kq_connect_client. It is doing
> 273Kconns/s (tcp connections, 8 processes, each tries to create 128
> connections). The server side is
> tools/tools/netrate/accept_connect/kq_accept_server (run w/ -r, i.e.
> SO_REUSEPORT). MSL is set to 10ms for the testing network and
> net.inet.ip.portrange.last is set to 40000.
Forgot to mention: the above kq_connect_client/kq_accept_server test
uses ix0 on A and ix0 on B; ix1 on A and B is not used in this
test. A (which runs the client) already consumes all of its CPU time
on ix0, as I said below :).
> When doing 273Kconns/s, client side consumes 100% cpu (system is still
> responsive though), mainly contended on tcp_port_token (350K
> contentions/s on each CPU). Server side has ~45% idle time on each
> CPU; contention is pretty low, mainly ip_id spinlock.
> The tcp_port_token contention is one of the major reasons we can't
> push 335Kconns/s from _one_ client. Another is the computational
> cost of the software Toeplitz hash on the client side; on the server
> side, the Toeplitz hash is calculated by hardware. I am currently
> working on reducing the tcp_port_token contention.
> Best Regards,
> Tomorrow Will Never Die
Tomorrow Will Never Die