HAMMER update 06-Feb-2008
Michael Neumann
mneumann at ntecs.de
Wed Feb 6 16:02:27 PST 2008
Simon 'corecode' Schubert wrote:
Matthew Dillon wrote:
* Implement the filesystem as one big huge circular FIFO, pretty much
  laid down linearly on disk, with a B-Tree to locate and access data.

* Random modifications (required for manipulating the B-Tree and marking
  records as deleted) will append undo records to this FIFO, and the only
  write ordering requirement will be that the buffers containing these
  undo records be committed before the buffers containing the random
  modifications are committed.
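Not HAMMER code, but the ordering rule above can be illustrated with a toy
model (all names here are made up for illustration): an undo record carrying
the pre-image is appended first, and a commit flushes the undo buffers before
the buffers holding the random modifications.

```python
# Toy model of the write-ordering requirement: undo records must reach
# stable storage before the random modifications they describe.
class Volume:
    def __init__(self):
        self.stable = []      # what has "hit the disk", in flush order
        self.undo_fifo = []   # pending undo records (circular FIFO, simplified)
        self.dirty = []       # pending random modifications

    def modify(self, target, old, new):
        # Append an undo record *first*; it captures the pre-image,
        # so the modification can be rolled back after a crash.
        self.undo_fifo.append(("undo", target, old))
        self.dirty.append(("write", target, new))

    def commit(self):
        # The only ordering constraint: undo buffers go out before
        # the buffers containing the modifications themselves.
        self.stable.extend(self.undo_fifo)
        self.stable.extend(self.dirty)
        self.undo_fifo.clear()
        self.dirty.clear()

v = Volume()
v.modify("btree-node-7", old=b"A", new=b"B")
v.commit()
assert v.stable[0][0] == "undo"    # pre-image is durable before the write
assert v.stable[1][0] == "write"
```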
This sounds quite like LFS now. LFS, however, splits the volume into
smaller blocks which can be "used", "empty" or "open", IIRC. Its
background cleaner can then push the remaining live data from used blocks
into the currently open one, marking the source block "empty" afterwards,
which allows the FS to write to those blocks again.
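A minimal sketch of that cleaner cycle, under the three-state model described
above (the state names match the mail; `clean` and the data layout are
hypothetical):

```python
from enum import Enum

class SegState(Enum):
    EMPTY = "empty"   # free for new writes
    OPEN  = "open"    # currently being appended to
    USED  = "used"    # full; may hold a mix of live and dead data

def clean(states, live):
    """Cleaner pass: copy live data out of USED blocks into the OPEN
    block, then mark each drained USED block EMPTY again."""
    open_blk = next(b for b, s in states.items() if s is SegState.OPEN)
    for blk in [b for b, s in states.items() if s is SegState.USED]:
        live[open_blk].extend(live.pop(blk, []))
        states[blk] = SegState.EMPTY

states = {0: SegState.USED, 1: SegState.OPEN, 2: SegState.EMPTY}
live = {0: ["inode-3"], 1: []}
clean(states, live)
assert states[0] is SegState.EMPTY      # drained block is reusable
assert live[1] == ["inode-3"]           # live data moved to the open block
```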
How about "Generational Garbage Collection"? Assuming that there are
some files that will never be deleted, this could give slightly better
performance.
Keep a "copy count" (a copy occurs whenever the cleaner has to move data
from the left end to the right end of the FIFO). If that count exceeds,
say, 3, copy the data into the old-generation FIFO instead.
One problem, of course, is how to size each generation, and how many
generations to use.
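The promotion rule could look something like this toy sketch (the threshold
of 3 comes from the mail; the two-generation layout and all names are my
assumptions, and sizing the generations remains the open question):

```python
GENERATIONS = 2      # young FIFO plus one old-generation FIFO (assumed)
PROMOTE_AFTER = 3    # copies tolerated before promotion, as suggested above

class Record:
    def __init__(self, payload):
        self.payload = payload
        self.copies = 0   # times the cleaner has relocated this record
        self.gen = 0      # 0 = young FIFO, 1 = old-generation FIFO

def cleaner_copy(rec):
    # Called when the cleaner moves a record from the left (oldest) end
    # of its FIFO to the right (newest) end.
    rec.copies += 1
    if rec.copies > PROMOTE_AFTER and rec.gen + 1 < GENERATIONS:
        rec.gen += 1      # long-lived data migrates to the old generation
        rec.copies = 0    # start counting afresh in the new generation

r = Record("some never-deleted file's blocks")
for _ in range(4):
    cleaner_copy(r)
assert r.gen == 1     # promoted on the 4th copy
```

Data that really is immortal settles in the old generation, so the young
FIFO's cleaner stops re-copying it on every pass.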
I think that's basically how LFS works.
Regards,
Michael