Patch for inode SLIST conversion

Michael Neumann mneumann at ntecs.de
Tue Feb 5 05:31:30 PST 2008


Matthew Dillon wrote:
:That's the perfect solution!
:
:Patch appended. _lwkt_trytokref needs the same change, as it might 
:acquire the token as well. I think setting lastref = NULL in 
:lwkt_reltoken is also not absolutely necessary, but it's not wrong
:either, so I did it.
:
:Regards,
:
:   Michael

    Looks pretty good.  It's almost ready for commit.  I see one bug and
    two implementation issues in lwkt_getalltokens().

    The 'if (tok->t_lastref != refs) tok->t_lastref = NULL' case should
    only occur inside the 'if (tok->t_owner != td)' conditional.  That 
    is, if the same thread has acquired the same token several times
    (each with its own ref), only the most recent acquisition (the first
    one found on the td_toks list) should have the t_lastref logic.
    Otherwise the remaining acquisitions will cause it to be NULL'd out
    every time.
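
    A minimal sketch of the restructured loop (a fragment only;
    field and variable names follow this thread, and the real
    function does more work than shown):

        for (refs = td->td_toks; refs; refs = refs->tr_next) {
            tok = refs->tr_tok;
            if (tok->t_owner != td) {
                /*
                 * Most recent acquisition of this token by td:
                 * claim it and do the staleness check here only.
                 */
                tok->t_owner = td;
                if (tok->t_lastref != refs)
                    tok->t_lastref = NULL;
            }
            /*
             * Deeper recursive refs for the same token see
             * t_owner == td and skip the check, so they no
             * longer NULL out t_lastref on every pass.
             */
        }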

    The first implementation issue is the UP version.  The non-SMP code
    does not have a lwkt_getalltokens() procedure so the staleness is
    not being tracked at all.  We need to create a UP lwkt_getalltokens()
    procedure as well whose only job is to run through and check t_lastref.
    We may have to implement t_owner for UP too.
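
    A sketch of what that UP procedure might look like, assuming
    t_owner is added to the UP token structure as described:

        #ifndef SMP
        /*
         * Hypothetical UP lwkt_getalltokens(): tokens are always
         * acquirable on UP, so its only job is the t_lastref
         * staleness bookkeeping.
         */
        static void
        lwkt_getalltokens(thread_t td)
        {
            lwkt_tokref_t refs;
            lwkt_token_t tok;

            for (refs = td->td_toks; refs; refs = refs->tr_next) {
                tok = refs->tr_tok;
                if (tok->t_owner != td) {
                    tok->t_owner = td;
                    if (tok->t_lastref != refs)
                        tok->t_lastref = NULL;
                }
            }
        }
        #endif
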
    The second implementation issue has to do with recursive acquisition
    of the same token.  The current t_lastref implementation will cause
    the deeper acquisition to 'stale-out' the higher level acquisition.
    It could be argued that if you have procedure A acquire a token and
    it calls procedure B which, unknown to procedure A also needs the token,
    then procedure A should detect the temporary loss of control over
    the structures represented by the token by seeing that it has become
    stale.  Your current implementation has this effect.
I'm unsure whether this is a common usage scenario. If we call a
function, we usually know its side effects.

So it's a decision whether we want staleness at thread granularity (a
token goes stale whenever a distinct thread acquires it) or at tokref
granularity (each distinct tokref stales out the token). I find thread
granularity more natural, if only because tokrefs confuse my mind more
than threads do.
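
The difference boils down to a single check; a hypothetical sketch of
the two variants (t_lastowner is an invented field for the
thread-granular case):

    /* tokref granularity: each distinct ref stales out the token */
    if (tok->t_lastref != refs)
        tok->t_lastref = NULL;

    /* thread granularity: only another thread stales it out */
    if (tok->t_lastowner != td)
        tok->t_lastref = NULL;
    tok->t_lastowner = td;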



I might have discovered a race in the UP version. lwkt_reltoken removes 
the tokref from the thread's token list:

    /* walk td_toks to find the pointer that links to ref ... */
    for (scanp = &td->td_toks; *scanp != ref;
         scanp = &((*scanp)->tr_next))
        ;
    /* ... and unlink ref with a single pointer store */
    *scanp = ref->tr_next;

Afterwards it modifies the token's t_globalcount:

    --tok->t_globalcount;

Okay, it's very unlikely that the decrement operation is preempted,
but imagine it is preempted after reading the current t_globalcount
into a register, right before decrementing it and storing the result
back to memory. Imagine the preempting thread now calls lwkt_gettokref
and executes this:

    /* scan every preempted thread for a conflicting tokref */
    while ((td = td->td_preempted) != NULL) {
        for (scan = td->td_toks; scan; scan = scan->tr_next) {
            if (scan->tr_tok == tok) {
                /* a preempted thread holds it: back off */
                lwkt_yield();
                return;
            }
        }
    }

As the token was already removed from the preempted thread's list, the
preempting thread won't find it there and thinks "I can safely acquire
it, as no preempted thread holds it". It then goes on and increments
the global count:

    /* NOTE: 'td' invalid after loop */
    ++tok->t_globalcount;

Bang! We lose an increment when the preempted thread resumes and
stores back its stale value. That's no problem as long as
t_globalcount is purely for debugging, but it becomes dangerous once
additional code (like the staleness code) is added.
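
To spell the lost update out, the decrement is really a
load/modify/store sequence, roughly:

    reg = tok->t_globalcount;   /* A: loads, say, 1              */
                                /* --- A preempted here ---      */
                                /* B: finds no conflicting ref,  */
                                /* does ++tok->t_globalcount,    */
                                /* memory now holds 2            */
    reg = reg - 1;              /* A resumes: 1 - 1 = 0          */
    tok->t_globalcount = reg;   /* stores 0, B's increment lost  */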

Disclaimer: Please prove that I am completely wrong here ;-)

A solution is to move the code that removes the tokref from the
thread's list (in lwkt_reltoken) after any token-modifying code. That
should be safe, as the store:

    *scanp = ref->tr_next;

should be atomic.
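
A sketch of the reordered UP lwkt_reltoken() (names are taken from the
snippets above; the real function may do additional work):

    void
    lwkt_reltoken(lwkt_tokref_t ref)
    {
        lwkt_token_t tok = ref->tr_tok;
        thread_t td = curthread;
        lwkt_tokref_t *scanp;

        /*
         * Do all token-modifying work while ref is still on
         * td_toks, so a preempting thread finds it there and
         * backs off instead of touching the token.
         */
        --tok->t_globalcount;

        /* unlink last; the single pointer store is atomic */
        for (scanp = &td->td_toks; *scanp != ref;
             scanp = &((*scanp)->tr_next))
            ;
        *scanp = ref->tr_next;
    }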

The MP version doesn't have these problems due to spinlocks that prevent 
a preempting thread from accessing the token.

So I'll add a t_owner to the UP version as well. I'd also like to
replace t_globalcount with t_count; that will simplify the code even
more.

Regards,

  Michael




