new lwbuf api and mpsafe sf_bufs
Matthew Dillon
dillon at apollo.backplane.com
Tue Mar 10 21:36:52 PDT 2009
:If implementing caching at this level would it make sense to still
:grab a page of kva at a time and cache forever, looking through our
:hash/tree in an attempt to reuse, then looking at the end of an
:overlapping LRU queue to see if we have one with a refcnt of 0, and
:acquiring more resources if all else fails up to our maximum limit?
:With some modest pre-allocation to avoid any initial sloshing.
:
:With the duration that these allocations are held I am not convinced
:that caching buys us anything unless we cache past the free.
:
:Sam
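The lookup order Sam describes (hash/tree reuse, then stealing an unreferenced LRU entry, then growing up to a limit) might be sketched roughly as below. This is an illustrative userland sketch, not the actual sf_buf code: the structure, the linear-scan stand-ins for the hash and LRU, and all names are hypothetical.

```c
/* Hypothetical sketch of the proposed lookup order.  A real
 * implementation would use a hash/tree for sf_lookup() and a proper
 * LRU queue; linear scans stand in for both here. */
#include <assert.h>
#include <stddef.h>

#define SFBUF_MAX 4             /* maximum resource limit (illustrative) */

struct sfbuf {
    void *page;                 /* backing page this KVA mapping covers */
    int   refcnt;               /* 0 means the mapping is reusable */
};

static struct sfbuf pool[SFBUF_MAX];
static int nallocated;

/* Step 1: try to reuse an existing mapping for this page. */
static struct sfbuf *
sf_lookup(void *page)
{
    for (int i = 0; i < nallocated; i++)
        if (pool[i].page == page)
            return &pool[i];
    return NULL;
}

/* Step 2: otherwise steal a mapping with refcnt 0; step 3: acquire a
 * new buffer if all else fails, up to the maximum. */
static struct sfbuf *
sf_get(void *page)
{
    struct sfbuf *sf = sf_lookup(page);

    if (sf == NULL) {
        for (int i = 0; i < nallocated; i++) {
            if (pool[i].refcnt == 0) {  /* unreferenced LRU victim */
                sf = &pool[i];
                break;
            }
        }
    }
    if (sf == NULL && nallocated < SFBUF_MAX)
        sf = &pool[nallocated++];       /* grow toward the limit */
    if (sf != NULL) {
        sf->page = page;
        sf->refcnt++;
    }
    return sf;                          /* NULL: everything is busy */
}
```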
Ok, I'll just reiterate some of what we talked about on IRC so
people reading the thread don't get confused by our skipping around.
Basically the original SFBUF system caches freed mappings which may
be reused later. The lwbuf API loses that ability. What I do like
about the lwbuf code is that it dynamically allocates the lwbufs.
The sfbufs are statically allocated and thus have scaling issues
(e.g. the example Samuel gave on IRC was a machine running many
parallel sendfile()s).  I would like to see the SFBUF code use a more
dynamic model and a sysctl which sets the maximum number of *excess*
sfbufs instead of the maximum number of sfbufs.
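The difference between capping total sfbufs and capping *excess* sfbufs could be sketched like this: allocate on demand, and only limit how many freed (refcnt 0) buffers are retained on the free list. This is a hedged userland sketch; `sf_excess_max`, the freelist layout, and the use of malloc/free are all stand-ins, not the actual kernel code.

```c
/* Sketch of a "maximum excess" policy: no hard cap on total sfbufs,
 * only on how many free ones are cached for reuse.  In the kernel
 * sf_excess_max would be a sysctl and malloc/free would be the
 * kernel allocator; all names here are hypothetical. */
#include <assert.h>
#include <stdlib.h>

static int sf_excess_max = 2;   /* max cached-but-free sfbufs */
static int sf_nfree;            /* current free-list length */

struct sfbuf {
    struct sfbuf *free_next;
};

static struct sfbuf *sf_freelist;

static struct sfbuf *
sf_alloc(void)
{
    struct sfbuf *sf = sf_freelist;

    if (sf != NULL) {                   /* reuse a cached buffer */
        sf_freelist = sf->free_next;
        sf_nfree--;
    } else {
        sf = malloc(sizeof(*sf));       /* grow dynamically on demand */
    }
    return sf;
}

static void
sf_free(struct sfbuf *sf)
{
    if (sf_nfree < sf_excess_max) {     /* retain up to the excess cap */
        sf->free_next = sf_freelist;
        sf_freelist = sf;
        sf_nfree++;
    } else {
        free(sf);                       /* beyond the cap: release it */
    }
}
```

Under this model a burst of parallel sendfile()s can grow the pool as far as memory allows, while the sysctl only bounds how much sits idle afterwards.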
The other part of this equation is how to optimize for MP operation.
Initially we should just use the same global index model that we
use now, though perhaps using the newly available atomic ops to
control the ref count and cpumask. As a second step we can figure
out a better model.
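The atomic-ops step might look something like the following, using C11 atomics as a stand-in for the kernel's atomic_*() routines. The field names and the cpumask interpretation (one bit per CPU that may have the mapping in its TLB) are illustrative assumptions, not the actual code.

```c
/* Sketch: lockless ref count and cpumask maintenance via atomics.
 * C11 <stdatomic.h> stands in for the kernel atomic ops; all names
 * are hypothetical. */
#include <assert.h>
#include <stdatomic.h>

struct sfbuf {
    atomic_int  refcnt;
    atomic_uint cpumask;        /* bit set per CPU that cached the mapping */
};

static void
sf_ref(struct sfbuf *sf, unsigned cpuid)
{
    atomic_fetch_add(&sf->refcnt, 1);
    /* mark this CPU as potentially holding the mapping in its TLB */
    atomic_fetch_or(&sf->cpumask, 1u << cpuid);
}

/* Returns nonzero when the last reference was dropped. */
static int
sf_unref(struct sfbuf *sf)
{
    return atomic_fetch_sub(&sf->refcnt, 1) == 1;
}
```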
-Matt
Matthew Dillon
<dillon at backplane.com>