git: sbin/hammer: Use big-block append offset to limit recovery scan range

Tomohiro Kusumi tkusumi at crater.dragonflybsd.org
Mon Dec 12 08:56:11 PST 2016


commit 79b114a050a39e17e910fcccc7396c345d91e9cb
Author: Tomohiro Kusumi <kusumi.tomohiro at gmail.com>
Date:   Mon Dec 12 15:44:55 2016 +0900

    sbin/hammer: Use big-block append offset to limit recovery scan range
    
    This commit fixes a remaining issue mentioned in e3cefcca, where
    the command recovers irrelevant files from the old filesystem even
    with the scan range limit introduced by e3cefcca and the quick
    scan mode introduced by e819b271.
    
    As shown in the example below, whenever a filesystem is recreated
    and the current one uses less space than the old one, the command
    is likely to recover files from the old filesystem (even with
    e3cefcca and e819b271), because B-Tree big-blocks may still
    contain nodes from the old filesystem beyond their append offset,
    especially if the block is the last one in the B-Tree zone.
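
    For context, the append offset referred to here is tracked per
    big-block by the layer2 blockmap entry. A minimal sketch of that
    on-disk entry is shown below; the layout is abridged from
    hammer_disk.h and should be read as an illustration rather than
    the authoritative definition:

     struct hammer_blockmap_layer2 {
             uint8_t   zone;        /* allocation zone of this big-block */
             uint8_t   unused01;
             uint16_t  unused02;
             uint32_t  append_off;  /* append offset: end of allocated data */
             uint32_t  bytes_free;  /* bytes still free in this big-block */
             uint32_t  entry_crc;   /* CRC covering this layer2 entry */
     };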
    
    To avoid recovering irrelevant files, the command needs to check
    whether the scanning offset is beyond the append offset of the
    B-Tree big-block that contains it, and ignore all nodes beyond
    that append offset. The lines marked [*] below show this
    situation. Note that the append offset is checked only if the
    layer1/layer2 entries pointing to this B-Tree big-block have a
    good CRC (a code sketch of this check follows the diagram below).
    
    This applies to both the default and quick scan modes, but not to
    the full scan mode, which scans everything regardless.
    
    --------------------------------------------------------> offset
    |--------------------------------------------------| volume size
    |<----------------------------------------->|        previously used
                                                |<---->| previously unused
    |<----------------------------------->|              currently used
                                          |<---------->| currently unused
    
                        ... -------------------------->| full scan
                     ... ---------------->|              default scan
     ... --->||<------->||<------->||<--->|              default scan [*]
     ... |<-->| ... |<-->| ... |<-->|                    quick scan
     ... |<->|  ... |<->|  ... |<->|                     quick scan [*]
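
    In sketch form, the check this commit adds looks roughly like the
    following. This is a simplified illustration, not the actual
    cmd_recover.c change; the constant and helper names here are made
    up for the example:

     #include <stdint.h>

     #define BIGBLOCK_SIZE   (8192 * 1024)   /* HAMMER big-blocks are 8MB */

     /*
      * Return non-zero if scan_off points past the append offset of
      * the big-block that contains it, i.e. into space the current
      * filesystem never wrote.  Only trust append_off when the
      * layer1/layer2 entries covering this big-block passed their
      * CRC checks; otherwise keep scanning the node as before.
      */
     static int
     node_beyond_append(uint64_t scan_off, uint32_t append_off,
                        int layer_crcs_ok)
     {
             uint64_t off_in_bigblock = scan_off % BIGBLOCK_SIZE;

             if (!layer_crcs_ok)
                     return (0);
             return (off_in_bigblock >= append_off);
     }

    The default and quick scan loops would simply skip any node for
    which this returns non-zero; the full scan mode does not apply
    the check and keeps scanning everything.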
    
    ===== comparison of recovered files
    1. Zero out the first 1GB of /dev/da1.
     # dd if=/dev/zero of=/dev/da1 bs=1M count=1K
     1024+0 records in
     1024+0 records out
     1073741824 bytes transferred in 2.714761 secs (395519867 bytes/sec)
    
    2. Create a filesystem and clone the DragonFly source tree (968MB).
     # newfs_hammer -L TEST /dev/da1 > /dev/null
     # mount_hammer /dev/da1 /HAMMER
     # cd /HAMMER
     # git clone /usr/local/src/dragonfly > /dev/null 2>&1
     # du -sh .
     968M    .
     # cd
     # umount /HAMMER
    
    3. Recreate the filesystem with 1 regular file.
     # newfs_hammer -L TEST /dev/da1 > /dev/null
     # mount_hammer /dev/da1 /HAMMER
     # cd /HAMMER
     # ls -l
     total 0
     # echo test > test
     # cat ./test
     test
     # cd
     # umount /HAMMER
    
    4-1. Recover the filesystem, which now has only 1 regular file.
     # rm -rf /tmp/a
     # hammer -f /dev/da1 recover /tmp/a recover > /dev/null
     # cat /tmp/a/PFS00000/test
     test
     # tree /tmp/a | wc -l
        19659
     # du -a /tmp/a | grep obj_0x | wc -l
        19661
    
    4-2. Do the same as 4-1 using this commit.
     # rm -rf /tmp/b
     # hammer -f /dev/da1 recover /tmp/b recover > /dev/null
     # cat /tmp/b/PFS00000/test
     test
     # tree /tmp/b
     /tmp/b
     `-- PFS00000
         `-- test
    
     1 directory, 1 file
     #

Summary of changes:
 sbin/hammer/cmd_recover.c | 51 ++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 39 insertions(+), 12 deletions(-)

http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/79b114a050a39e17e910fcccc7396c345d91e9cb


-- 
DragonFly BSD source repository

