HAMMER update 10-Feb-2008

Bill Hacker wbh at conducive.org
Mon Feb 11 00:08:29 PST 2008

Matthew Dillon wrote:
    HAMMER is really shaping up now. Here's what works now:

    I have already run some tests with regards to the blockmap allocation
    model and it looks very good.   What I did was implement an array of
    blockmap entry structures rather than just an array of pointers to the
    actual physical big-blocks.  The blockmap entry structure not only has
    a pointer to the underlying physical big-block, it also has a
    bytes_free field which specifies how many bytes in the underlying
    big-block are free.
    This is the only tracking done by the blockmap.  It does not actually
    try to track WHERE in the big-block the free areas are... figuring
    that out will be up to the cleaning code.  What this gives us:
    * Extremely fast freeing of on-disk storage elements.  The target
      physical block doesn't have to be read or written, only the governing
      blockmap entry.  With 8MB big-blocks and 32-byte blockmap entries one
      16K buffer can track 4GB worth of underlying storage, which means
      that freeing large amounts of sparse information does not cause the
      disk to seek all over the place.
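The scheme described above might be sketched roughly as follows in C. The struct layout and field names here are illustrative guesses based only on the description in this mail (a physical-block pointer plus a bytes_free count in a 32-byte entry), not HAMMER's actual on-disk format:

```c
#include <assert.h>
#include <stdint.h>

#define BIGBLOCK_SIZE   (8ULL * 1024 * 1024)    /* 8MB big-blocks */

/*
 * Hypothetical 32-byte blockmap entry.  A 16K buffer holds
 * 16384 / 32 = 512 such entries, each governing an 8MB big-block,
 * so one buffer tracks 512 * 8MB = 4GB of underlying storage.
 */
struct blockmap_entry {
    uint64_t phys_offset;   /* pointer to the underlying physical big-block */
    uint32_t bytes_free;    /* how many bytes in that big-block are free */
    uint32_t reserved[5];   /* pad out to the 32 bytes mentioned above */
};

/*
 * Freeing storage only adjusts the governing blockmap entry; the
 * target big-block itself is never read or written, which is why
 * freeing large amounts of sparse data does not seek all over the disk.
 */
static void
blockmap_free(struct blockmap_entry *entry, uint32_t bytes)
{
    entry->bytes_free += bytes;
    assert(entry->bytes_free <= BIGBLOCK_SIZE);
}
```

Note the trade-off this makes explicit: the blockmap knows only *how much* of each big-block is free, not *where*, so reclaiming the space for reuse is deferred to the cleaning code.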

Struggling today with a situation wherein 82 gigabytes of data were 
moved into an IMAP trash folder on UFS2, exhausting inodes, names, et al. 
long before disk space (plenty of that left), and a cleanup attempt that gets:

/bin/rm: Argument list too long

Unless I script it into manageable chunks....
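FWIW, the usual workaround for that argv-size limit is to stream the file names through xargs rather than letting the shell expand the glob into one giant argument list; something like the following (the trash path is illustrative):

```shell
# "rm *" fails once the expanded glob exceeds the kernel's argument-list
# limit.  find emits NUL-separated names; xargs batches them into as many
# rm invocations as needed, each safely under the limit.
find /var/mail/Trash -maxdepth 1 -type f -print0 | xargs -0 rm --
```

This still issues one unlink per file, of course; it only sidesteps the exec-time limit, which is why a filesystem-level answer to bulk deletion would be welcome.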

If the HAMMER fs has a better mechanism - even a rather BFBI one - to 
handle that sort of need for massive deletions, it will make a convert here.
