HAMMER UPDATE 10-Jun-2008 (HEADS UP, MEDIA CHANGE!)

Matthew Dillon dillon at apollo.backplane.com
Tue Jun 10 17:18:59 PDT 2008


    I have made another change to the HAMMER media structures.  I 
    determined that the B-Tree was using too small a radix, so I bumped
    it up from 16 to 64.  A full recompile of the HAMMER filesystem and
    its utilities, including newfs_hammer, is required, and any existing
    HAMMER filesystems must be re-newfs'd (sorry, that's the way it goes;
    it's still under development).  I pick up my foot :-)
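
    For illustration only (the structure and constant names below are
    assumptions, not HAMMER's actual on-disk definitions): the radix is
    the number of elements a B-Tree node holds, so raising it from 16 to
    64 changes the size and layout of every node on the media, which is
    why existing filesystems have to be recreated.

    #include <stdint.h>

    /* Hypothetical node layout; the real HAMMER definitions differ. */
    #define BTREE_RADIX     64      /* was 16 */

    struct btree_elm {              /* stand-in for one element */
            int64_t key;
            int64_t data_offset;
    };

    struct btree_node {             /* one on-disk B-Tree node */
            int count;              /* live elements, <= BTREE_RADIX */
            struct btree_elm elms[BTREE_RADIX];
    };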

    WARNING!  Another media change will occur in the next day or two as
    well!

    As of commit 53H I believe I have fixed all remaining bugs.  BUT (always
    a but!)... I added an optimization to the B-Tree code that still needs
    testing, so you may see some follow-up commits if it turns out I
    blew the optimization.  The optimization replaces the linear scan of
    a B-Tree node's elements.  That was fine when there were 16 elements,
    but now that there are 64 I changed it to do a power-of-2 narrowing
    scan.
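
    To illustrate the difference (a sketch only; the element layout and
    function names here are made up, not HAMMER's B-Tree code), a linear
    scan may touch all 64 keys in a node, while a power-of-2 narrowing
    scan halves the candidate range each pass and finds the boundary
    element in at most 6 steps:

    #include <stdio.h>
    #include <stdint.h>

    /*
     * Both routines return the index of the last element whose key is
     * <= the search key, or -1 if the key precedes every element.
     */
    static int
    search_linear(const int64_t *keys, int nelms, int64_t key)
    {
            int i;

            for (i = 0; i < nelms; ++i) {
                    if (key < keys[i])
                            break;
            }
            return (i - 1);
    }

    static int
    search_narrowing(const int64_t *keys, int nelms, int64_t key)
    {
            int lo = 0;
            int hi = nelms;

            while (lo < hi) {
                    int mid = (lo + hi) / 2;

                    if (key < keys[mid])
                            hi = mid;
                    else
                            lo = mid + 1;
            }
            return (lo - 1);
    }

    int
    main(void)
    {
            int64_t keys[64];
            int i;

            for (i = 0; i < 64; ++i)
                    keys[i] = i * 10;       /* sorted sample keys */

            /* Both should report index 37 (key 370). */
            printf("linear: %d  narrowing: %d\n",
                   search_linear(keys, 64, 375),
                   search_narrowing(keys, 64, 375));
            return (0);
    }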

    Performance is coming along nicely.  I've made some progress, and tests
    such as blogbench show tantalizing possibilities.  HAMMER currently has
    an issue with a backlog of dirty inodes building up and screwing up
    performance for long-running tests.  I would have posted this message
    earlier, but I screwed up the GPT partition on my RAID-1 (it wasn't
    aligned to the stripe size), so all my tests blew up in my face.
    I will post a follow-up tonight once I whack a few more performance
    issues.

    I will be fixing the sequential write performance issues sometime
    tonight.  That turned out to be two issues.  First, the record limit
    is set absurdly low, causing unnecessary flushes.  Second, when the
    write path sees that the records have hit their limit, it does a
    complete flush of the inode before letting more writes through, when
    it should only need to wait for the record count to fall below the
    limit.
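
    A rough sketch of the second fix (the identifiers below are
    hypothetical stand-ins, not HAMMER's actual structures or flusher
    API): instead of forcing a complete flush of the inode when the
    in-memory record count hits the limit, the writer should only block
    until the flusher drains the backlog back below the limit.

    #include <stdio.h>

    #define REC_LIMIT 100           /* assumed limit; the real one is tunable */

    struct inode_state {
            int rec_count;          /* dirty in-memory records queued */
    };

    /* Stand-ins for the flusher interaction. */
    static void
    flush_inode_completely(struct inode_state *ip)
    {
            ip->rec_count = 0;      /* writer stalls for the whole flush */
    }

    static void
    wait_for_flusher(struct inode_state *ip)
    {
            ip->rec_count--;        /* pretend the flusher retired a record */
    }

    /* Current behavior: hitting the limit flushes the entire inode. */
    static void
    write_record_old(struct inode_state *ip)
    {
            if (ip->rec_count >= REC_LIMIT)
                    flush_inode_completely(ip);
            ip->rec_count++;
    }

    /*
     * Intended behavior: just wait for the count to drop below the limit,
     * so the writer and the flusher overlap instead of serializing.
     */
    static void
    write_record_new(struct inode_state *ip)
    {
            while (ip->rec_count >= REC_LIMIT)
                    wait_for_flusher(ip);
            ip->rec_count++;
    }

    int
    main(void)
    {
            struct inode_state a = { 0 }, b = { 0 };
            int i;

            for (i = 0; i < 250; ++i) {
                    write_record_old(&a);
                    write_record_new(&b);
            }
            printf("old: %d queued, new: %d queued\n",
                   a.rec_count, b.rec_count);
            return (0);
    }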

						-Matt





