new lwbuf api and mpsafe sf_bufs

Samuel J. Greear sjg at evilcode.net
Tue Mar 10 17:49:22 PDT 2009


On Tue, Mar 10, 2009 at 5:28 PM, Matthew Dillon
<dillon at apollo.backplane.com> wrote:
>    Hmm.  If I understand this correctly you are removing SFB_CPUPRIVATE
>    from the SFBUF API and splitting the entire set of API procedures out
>    for private-cpu use into a new LWBUF API?
>
>    It looks like you moved the cpumask mmu syncing to lwbuf_kva(),
>    but I think I see a bug in lwbuf_alloc().
>
>    From my reading of the code lwbuf_alloc() is not able to reuse
>    the KVA mappings for previously cached LWBUFs because it is
>    unconditionally changing the vm_page (lwb->m = m; pmap_kenter_quick...).
>    Because of the pmap change, lwb->cpumask would have to be set to
>    gd->gd_cpumask, NOT or'd with the previous mask.
>
>    i.e. the bug is:
>
>        lwb->cpumask |= gd->gd_cpumask;
>
>    It needs to be:
>
>        lwb->cpumask = gd->gd_cpumask;
>

Thanks.
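
For the archives, the corrected path through lwbuf_alloc() would then
look roughly like the following (treat the objcache handle and the
kva field name as placeholders, only the assignment at the end is the
actual point):

    struct lwbuf *
    lwbuf_alloc(vm_page_t m)
    {
            struct globaldata *gd = mycpu;
            struct lwbuf *lwb;

            lwb = objcache_get(lwbuf_cache, M_WAITOK); /* placeholder handle */
            lwb->m = m;
            pmap_kenter_quick(lwb->kva, VM_PAGE_TO_PHYS(m));
            /*
             * The KVA was just re-targeted at a new page, so only the
             * current cpu is known to have the mapping synced; assign,
             * do not OR.
             */
            lwb->cpumask = gd->gd_cpumask;
            return (lwb);
    }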

>    If you want to separate out private mappings like that I think the
>    LWBUFs have to be cached just like they are with SFBUFs.  I like the
>    use of the objcache but I think the LWBUF code needs to retain
>    a sysctl-programmable number of lwb elements in an RBTREE and try to
>    reuse the mappings.  i.e. it needs to be a bit more sophisticated
>    than your current design.
>
>                                        -Matt
>                                        Matthew Dillon
>                                        <dillon at backplane.com>
>

If implementing caching at this level, would it make sense to still
grab a page of KVA at a time and cache it forever: first look through
our hash/tree in an attempt to reuse an existing mapping, then look at
the tail of an overlapping LRU queue for an entry with a refcnt of 0,
and only acquire more resources, up to our maximum limit, if all else
fails?  With some modest pre-allocation to avoid any initial sloshing.
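
Roughly what I have in mind for the allocation path; everything below
(the cache lookup helpers, the LRU queue, lwbuf_grow()/lwbuf_wait(),
and the lwbuf_max knob standing in for the sysctl-programmable limit)
is a hypothetical sketch, not existing code:

    static int lwbuf_max = 256;             /* placeholder default */
    SYSCTL_INT(_kern_ipc, OID_AUTO, lwbuf_max, CTLFLAG_RW,
        &lwbuf_max, 0, "Maximum number of cached lwbuf KVA mappings");

    struct lwbuf *
    lwbuf_alloc(vm_page_t m)
    {
            struct lwbuf *lwb;

            /* 1. Rehit a live mapping of this page via the hash/tree. */
            if ((lwb = lwbuf_cache_lookup(m)) != NULL) {
                    ++lwb->refcnt;
                    TAILQ_REMOVE(&lwbuf_lru, lwb, lru_entry);
                    TAILQ_INSERT_HEAD(&lwbuf_lru, lwb, lru_entry);
                    return (lwb);
            }

            /* 2. Else steal the KVA of an unreferenced entry at the LRU tail. */
            lwb = TAILQ_LAST(&lwbuf_lru, lwbuf_lru_head);
            if (lwb != NULL && lwb->refcnt == 0) {
                    TAILQ_REMOVE(&lwbuf_lru, lwb, lru_entry);
                    lwbuf_cache_remove(lwb);  /* drop the stale hash/tree entry */
            } else if (lwbuf_count < lwbuf_max) {
                    /* 3. Everything is busy: grow, a page of KVA at a time. */
                    lwb = lwbuf_grow();
            } else {
                    /* 4. Hard limit reached: wait for a buffer to be released. */
                    lwb = lwbuf_wait();
            }

            /* Enter the new mapping; only this cpu has it synced. */
            lwb->m = m;
            pmap_kenter_quick(lwb->kva, VM_PAGE_TO_PHYS(m));
            lwb->cpumask = mycpu->gd_cpumask;
            lwb->refcnt = 1;
            lwbuf_cache_insert(lwb);
            TAILQ_INSERT_HEAD(&lwbuf_lru, lwb, lru_entry);
            return (lwb);
    }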

Given how briefly these allocations are held, I am not convinced that
caching buys us anything unless we cache past the free.
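
On the release side that would mean something like the following
(again just a sketch): lwbuf_free() drops the reference but leaves the
mapping and the hash/tree entry intact, letting lwbuf_alloc() reclaim
the KVA lazily from the LRU tail.

    void
    lwbuf_free(struct lwbuf *lwb)
    {
            /*
             * Keep lwb->m, lwb->kva and the hash/tree entry intact so a
             * later lwbuf_alloc() of the same page can rehit the mapping.
             * The entry only becomes a steal candidate once its refcnt
             * reaches zero and it drifts toward the LRU tail.
             */
            --lwb->refcnt;
    }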

Sam




