git: kernel - Reduce spinning on shared spinlocks

Matthew Dillon dillon at
Sun Dec 4 09:41:14 PST 2016

commit 01be7a8f282aa7c6d8ac2e383e81d094d83f9bf9
Author: Matthew Dillon <dillon at>
Date:   Sun Dec 4 09:10:25 2016 -0800

    kernel - Reduce spinning on shared spinlocks

    * Improve spinlock performance by removing unnecessary extra reads,
      using atomic_fetchadd_int() to avoid a cmpxchg loop, and allowing
      the SHARED flag to remain soft-set on the 1->0 transition.
    * The primary improvement here is that multiple cpu's obtaining the
      same shared spinlock can now do so via a single atomic_fetchadd_int(),
      whereas before we had multiple atomics and cmpxchg loops.  This does not
      remove the cacheline ping-pong but it significantly reduces unnecessary
      looping when multiple cpu cores are heavily loading the same shared
      spinlock.
    * The trade-off is the case where a spinlock's use-case switches from
      shared to exclusive or back again, which now requires an extra atomic
      op to handle.  This is not a common case.
    * Remove the spin->countb debug code; it interferes with hw cacheline
      operations and is no longer desirable.
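    The fast path described above can be sketched roughly as follows. This
    is a minimal illustration using C11 atomics, not DragonFly's actual
    struct spinlock or kern_spinlock.c code; the field layout, the
    SPIN_EXCLUSIVE bit, and all function names here are hypothetical. The
    point it demonstrates is that a shared acquire can be a single
    fetch-add in the common case, backing out only when an exclusive
    holder is present, instead of looping on cmpxchg.

    ```c
    #include <stdatomic.h>
    #include <stdio.h>

    /* Hypothetical lock word: low bits count shared holders, one high
     * bit marks an exclusive owner.  Not the real DragonFly layout. */
    #define SPIN_EXCLUSIVE 0x80000000u

    typedef struct {
        _Atomic unsigned int lock;
    } sspin_t;

    /* Shared acquire attempt: optimistically take a shared reference
     * with one fetch-add.  Only if an exclusive holder was present do
     * we back the reference out and report failure, so the contended
     * shared/shared case never spins on a cmpxchg loop. */
    static int
    sspin_try_shared(sspin_t *sp)
    {
        unsigned int prev = atomic_fetch_add(&sp->lock, 1);

        if (prev & SPIN_EXCLUSIVE) {
            atomic_fetch_sub(&sp->lock, 1);  /* back out, exclusive held */
            return 0;
        }
        return 1;                            /* shared reference acquired */
    }

    static void
    sspin_unlock_shared(sspin_t *sp)
    {
        atomic_fetch_sub(&sp->lock, 1);      /* drop our shared reference */
    }

    int
    main(void)
    {
        sspin_t sp = { 0 };

        /* Two cpus taking the same shared lock each cost one atomic. */
        int a = sspin_try_shared(&sp);
        int b = sspin_try_shared(&sp);
        printf("%d %d %u\n", a, b, atomic_load(&sp.lock));

        sspin_unlock_shared(&sp);
        sspin_unlock_shared(&sp);
        printf("%u\n", atomic_load(&sp.lock));
        return 0;
    }
    ```

    Note the cacheline still ping-pongs between cores, as the commit says;
    what the fetch-add removes is the retry looping around it.
    
    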
    Discussed-with: Mateusz Guzik (mjg_)

Summary of changes:
 sys/kern/kern_spinlock.c | 40 ++++++++++++++--------------------------
 sys/sys/spinlock2.h      | 37 +++++++++++++++++++++----------------
 2 files changed, 35 insertions(+), 42 deletions(-)

DragonFly BSD source repository

More information about the Commits mailing list