git: hammer2 - Refactor frontend part 9/many

Matthew Dillon dillon at crater.dragonflybsd.org
Tue Jun 23 23:15:25 PDT 2015


commit c847e8387ad749d611d395742d337213aefef3b9
Author: Matthew Dillon <dillon at apollo.backplane.com>
Date:   Tue Jun 23 23:01:54 2015 -0700

    hammer2 - Refactor frontend part 9/many
    
    * Create initial frontend/backend XOP infrastructure.
    
      frontend:
    	hammer2_xop_alloc()
    	hammer2_xop_start()
    	...  hammer2_xop_collect() loop ...
    	hammer2_xop_retire(xop, HAMMER2_XOPMASK_VOP)
    
      backend:
	(the backend is called with the shared xop structure in a separate
	 backend thread for each node of the cluster appropriate to the
	 operation).
    
    	... issue chain calls as needed ...
    	... hammer2_xop_feed() ...		(feed chains back to frontend)
    	hammer2_xop_feed(NULL)			(feed NULL chain)
    	hammer2_xop_retire(xop, 1U << clindex)
    
      The XOP contains a FIFO, allowing the backend to pipeline results when
      appropriate (e.g. readdir).  If a sequence of results is expected, the
      backend should finish with a NULL chain.  If only a single result is
      expected, the backend can simply feed back that result, which is often
      just the chain representing the inode.
    
      The frontend calls hammer2_xop_collect() to collect results from all the
      backend nodes.  The collect function handles quorum validation and
      consolidates the results from a sufficient number of cluster nodes into
      a single result for the frontend.
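
      To make the flow concrete, a rough frontend sketch follows.  The call
      sequence mirrors the description above; the type names, argument
      lists, and the ENOENT end-of-results convention are illustrative
      assumptions rather than the actual signatures:

	/*
	 * Hypothetical frontend path for some VOP: allocate the shared
	 * XOP, dispatch the per-node backend threads, collect the
	 * quorum-validated results in a loop, then disconnect from the
	 * operation (the backends may still be running).
	 */
	static int
	xop_frontend_example(hammer2_inode_t *ip)
	{
		hammer2_xop_t *xop;
		int error;

		xop = hammer2_xop_alloc(ip);
		hammer2_xop_start(xop);
		for (;;) {
			error = hammer2_xop_collect(xop);
			if (error)		/* e.g. ENOENT = end of results */
				break;
			/* ... consume the consolidated cluster result ... */
		}
		hammer2_xop_retire(xop, HAMMER2_XOPMASK_VOP);
		return ((error == ENOENT) ? 0 : error);
	}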
    
    * The frontend can disconnect from the operation at any time so that it
      can return a result even while backend elements are still running.
      This typically occurs once a sufficient number of nodes in the cluster
      have responded to validate the quorum.
    
      This also allows backend nodes to stall indefinitely without stalling the
      frontend.
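
      As a simplified illustration of the quorum idea (not the actual
      hammer2 consolidation code), the collect side conceptually applies a
      majority test like:

	/*
	 * Hypothetical quorum check: with nnodes nodes in the cluster, a
	 * result can be returned to the frontend once more than half of
	 * the nodes have fed back an equivalent result.
	 */
	static int
	quorum_reached(int nresponded, int nnodes)
	{
		return (nresponded > nnodes / 2);
	}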
    
    * Because frontend concurrency is lost when the bulk of the work is done
      by the backend, the hammer2 mount code will allocate roughly 16 work
      threads per node to distribute potentially many frontend operations.
    
    * Most frontend operations use existing cache layers to retain frontend
      concurrency: inode meta-data access and modification, logical buffer
      cache operations (when cached), and vnodes cached via the namecache.
      If the cache is not available, operations wind up using the VOP/XOP
      infrastructure, including the buffer-cache strategy routines (in an
      upcoming commit).
    
    * Implement readdir() using the new infrastructure as an initial test.
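
      For illustration, a per-node readdir backend would look roughly like
      the sketch below.  Only the feed/retire sequence is taken from the
      description above; the chain iteration helpers and the exact
      hammer2_xop_feed() arguments are placeholders:

	/*
	 * Hypothetical backend, run once per cluster node: iterate the
	 * directory chains, pipeline each one into the XOP FIFO, then
	 * feed a NULL chain to terminate the sequence and retire this
	 * node's participation.
	 */
	static void
	xop_readdir_backend(hammer2_xop_t *xop, int clindex)
	{
		hammer2_chain_t *chain;
		int error = 0;

		for (chain = first_dirent_chain(xop, clindex);	/* placeholder */
		     chain && error == 0;
		     chain = next_dirent_chain(xop, clindex, chain)) {
			error = hammer2_xop_feed(xop, chain, clindex, 0);
		}
		hammer2_xop_feed(xop, NULL, clindex, error);	/* end of results */
		hammer2_xop_retire(xop, 1U << clindex);
	}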
    
    * Fix an ip->meta synchronization bug related to hardlinks that was
      introduced by the ip->meta local copy work.

Summary of changes:
 sys/vfs/hammer2/TODO              |   3 +
 sys/vfs/hammer2/hammer2.h         | 134 ++++++++++--
 sys/vfs/hammer2/hammer2_chain.c   |   4 +-
 sys/vfs/hammer2/hammer2_cluster.c | 408 +++++++++++++++++++++++++++++++++-
 sys/vfs/hammer2/hammer2_inode.c   |  55 +++--
 sys/vfs/hammer2/hammer2_thread.c  | 448 ++++++++++++++++++++++++++++++++++++--
 sys/vfs/hammer2/hammer2_vfsops.c  |  56 +++--
 sys/vfs/hammer2/hammer2_vnops.c   | 124 ++++-------
 8 files changed, 1064 insertions(+), 168 deletions(-)

http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/c847e8387ad749d611d395742d337213aefef3b9


-- 
DragonFly BSD source repository