dragonfly system backups

Matthew Dillon dillon at apollo.backplane.com
Mon Apr 9 07:47:06 PDT 2007

:Dmitri Nikulin wrote:
:> Is there a really good reason not to tar the backups? It seems
:> impossible to have 50 million tarballs/days of backups, so I infer
:> that you're just doing a straight copy. Or maybe now's the time to
:> homebrew a more intelligent backup system. Or just make the
:> journalling layer useful for backups.
:matt hardlinks files which didn't change since the last N-level run.
:this way there is always a full directory structure available.  it's
:fairly straightforward *and* intelligent, just our FS isn't too happy
:about that.
:  simon

    Yup.  Hardlinks don't create extra inodes.  I think what is going on
    is that since directories have to be duplicated for each day (even if
    the files in them are mostly hardlinks), large directory trees like
    the ones in CVS cause the number of inodes to bloat up.

    Last count before I had to wipe the FS, there were something like
    30 million directories out of the 50 million inodes.
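
    The scheme Simon describes can be sketched with plain ln(1) -- this
    is a hypothetical demo (made-up paths, not the actual backup script;
    rsync's --link-dest or GNU cp -al do the same thing at scale).  Each
    daily snapshot re-creates every directory, so directories get fresh
    inodes while unchanged files share one:

```shell
#!/bin/sh
# Hypothetical demo: daily snapshots duplicate every directory
# (one new inode each) but hardlink unchanged files (no new inode).
set -e
work=$(mktemp -d)

# "Day 1": a full backup containing one directory and one file.
mkdir -p "$work/day1/cvsdir"
echo unchanged > "$work/day1/cvsdir/file"

# "Day 2": re-create the directory tree, hardlink the unchanged file.
mkdir -p "$work/day2/cvsdir"
ln "$work/day1/cvsdir/file" "$work/day2/cvsdir/file"

# The file shares one inode across both snapshots...
i1=$(ls -i "$work/day1/cvsdir/file" | awk '{print $1}')
i2=$(ls -i "$work/day2/cvsdir/file" | awk '{print $1}')
[ "$i1" = "$i2" ] && echo "file: same inode"

# ...but each day's directory is a fresh inode, so with N days of
# snapshots the directory count grows linearly.
d1=$(ls -di "$work/day1/cvsdir" | awk '{print $1}')
d2=$(ls -di "$work/day2/cvsdir" | awk '{print $1}')
[ "$d1" != "$d2" ] && echo "dirs: distinct inodes"

rm -rf "$work"
```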

    It's really the number of directories that is gumming up fsck the 
    most.  fsck has to allocate memory to keep track of them all and
    is running out of user address space AND running out of system swap.
    Plus it takes 12 hours to run.  heh.
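
    Rough arithmetic on why that blows out fsck's address space -- the
    ~100 bytes of state per tracked directory below is an assumed
    illustrative figure, not fsck's actual per-directory structure size:

```shell
# Back-of-envelope only: assume fsck keeps ~100 bytes of bookkeeping
# per directory (hypothetical figure, not the real struct size).
dirs=30000000
bytes_per_dir=100
mb=$(( dirs * bytes_per_dir / 1024 / 1024 ))
echo "${mb} MB"    # roughly 2.8 GB, pressing against a 32-bit
                   # process's user address space even before swap
```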


More information about the Kernel mailing list