A few questions about HAMMER master/slave PFS

Michael Neumann mneumann at ntecs.de
Mon Aug 27 13:23:41 PDT 2018


On Mon, Aug 27, 2018 at 06:08:10PM +0200, Laurent Vivier wrote:
> Hello DFlyers,
> 
> I am running DragonFly 5.2.2 as an NFS server with 2x 2TB LUKS-backed HDDs with HAMMER1 v7 as FS in a PFS master/slave mirror-stream setup and it's been working great so far :)
> 
> The setup looks like this :
> 
> Disk1 -> LUKS -> HAMMER1_2TB -> PFS# 0 (root) 
> Disk2 -> LUKS -> HAMMER_SLAVE -> PFS# 0 (root) + PFS 1 (slave to HAMMER1_2TB / PFS# 0)
> 
> Now that I am using the system for a little while, I have a few questions regarding its behavior :
> 
> 1) I realized that the HAMMER slave PFS has several snapshots (not created by me) that seem impossible to remove, e.g.

HAMMER1 uses fine-grained snapshots, which means that it basically
creates an "unnamed" snapshot automatically whenever it flushes
something to disk (roughly every 30 seconds). Usually you don't want to
keep all these fine-grained snapshots and instead keep one snapshot per
day (or one per week, ...). This is what "hammer cleanup" does. You can
configure its history retention policy by running "hammer viconfig".
From the man page of "hammer cleanup":

                   snapshots  1d 60d  # 0d 0d  for PFS /tmp, /var/tmp, /usr/obj
                   prune      1d 5m
                   rebalance  1d 5m
                   #dedup      1d 5m  # not enabled by default
                   reblock    1d 5m
                   recopy     30d 10m

This means that when you run "hammer cleanup", it takes one snapshot per
day and retains the last 60 daily snapshots. "hammer cleanup" also
performs other tasks, for instance pruning (1d = every day, for at most
5 minutes per run). Pruning deletes all the intermediate fine-grained
snapshots between the "named" daily snapshots. It also rebalances the
B-tree, reblocks and recopies, which are operations to optimize
performance, and dedups, which saves space.
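
For example, you can run the cleanup on a single PFS or on everything at
once (the slave path below is taken from your "snapls" output; adjust it
to your setup):

    # clean up only the slave PFS, using the config stored in that PFS
    hammer cleanup /HAMMER_SLAVE/pfs/hanma
    # or clean up all HAMMER file systems and PFSs in use on the machine
    hammer cleanup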

If you want to delete snapshots, just change "snapshots 1d 60d" to, for
instance, "snapshots 1d 7d" and run "hammer cleanup". If you want to
delete all historical data, you can use "hammer prune-everything", but
be careful and read the man page first!
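
Concretely, for the slave PFS from your output that would be something
along these lines (just a sketch, double-check the paths on your system):

    hammer viconfig /HAMMER_SLAVE/pfs/hanma   # change "snapshots 1d 60d" to "snapshots 1d 7d"
    hammer cleanup /HAMMER_SLAVE/pfs/hanma    # prunes history outside the new policy
    # only if you really want to drop ALL historical data on that PFS
    # (read hammer(8) first):
    # hammer prune-everything /HAMMER_SLAVE/pfs/hanma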

One nice feature of HAMMER1 is that the master PFS and slave PFS can
have different history retention policies in place.
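
For instance, you could keep a short history on the master and a long one
on the slave, roughly like this (I am assuming the master is mounted at
/HAMMER1_2TB; adjust to your actual mount point):

    hammer viconfig /HAMMER1_2TB              # e.g. "snapshots 1d 7d" on the master
    hammer viconfig /HAMMER_SLAVE/pfs/hanma   # e.g. "snapshots 1d 60d" on the slave
    hammer cleanup                            # apply both policies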

> 
> Hikaeme# hammer info /HAMMER_SLAVE
> Volume identification
>         Label               hammer1_secure_slave
>         No. Volumes         1
>         HAMMER Volumes      /dev/mapper/knox2
>         Root Volume         /dev/mapper/knox2
>         FSID                0198767f-7139-11e8-9608-6d626d258b95
>         HAMMER Version      7
> Big-block information
>         Total          238335
>         Used           192009 (80.56%)
>         Reserved           32 (0.01%)
>         Free            46294 (19.42%)
> Space information
>         No. Inodes      35668
>         Total size       1.8T (1999298887680 bytes)
>         Used             1.5T (80.56%)
>         Reserved         256M (0.01%)
>         Free             362G (19.42%)
> PFS information
>           PFS#  Mode    Snaps
>              0  MASTER      0 (root PFS)
>              1  SLAVE       3
> Hikaeme# hammer snapls /HAMMER_SLAVE/pfs/hanma
> Snapshots on /HAMMER_SLAVE/pfs/hanma  PFS#1
> Transaction ID          Timestamp                 Note
> 0x00000001034045c0      2018-07-04 18:19:42 CEST  -
> 0x00000001034406c0      2018-07-09 19:28:04 CEST  -
> 0x000000010383bc30      2018-08-12 10:51:07 CEST  -
> Hikaeme# hammer snaprm 0x00000001034045c0
> hammer: hammer snaprm 0x00000001034045c0: Operation not supported

Have you tried "hammer snaprm /HAMMER_SLAVE/pfs/hanma@@0x00000001034045c0"?

> My question here is should I worry about it/is that an intended behavior ? 
> 
> 2) When executing hammer info and looking at the used space between master and slave PFS, I have quite a big difference (22GB, even after running hammer cleanup)
> 
> Hikaeme# hammer info
> Volume identification
>         Label               HAMMER1_2TB
>         No. Volumes         1
>         HAMMER Volumes      /dev/mapper/knox
>         Root Volume         /dev/mapper/knox
>         FSID                81e9d5eb-6be7-11e8-802d-6d626d258b95
>         HAMMER Version      7
> Big-block information
>         Total          238335
>         Used           194636 (81.66%)
>         Reserved           32 (0.01%)
>         Free            43667 (18.32%)
> Space information
>         No. Inodes      35665
>         Total size       1.8T (1999298887680 bytes)
>         Used             1.5T (81.66%)
>         Reserved         256M (0.01%)
>         Free             341G (18.32%)
> PFS information
>           PFS#  Mode    Snaps
>              0  MASTER      0 (root PFS)
> 
> Volume identification
>         Label               hammer1_secure_slave
>         No. Volumes         1
>         HAMMER Volumes      /dev/mapper/knox2
>         Root Volume         /dev/mapper/knox2
>         FSID                0198767f-7139-11e8-9608-6d626d258b95
>         HAMMER Version      7
> Big-block information
>         Total          238335
>         Used           191903 (80.52%)
>         Reserved           32 (0.01%)
>         Free            46400 (19.47%)
> Space information
>         No. Inodes      35668
>         Total size       1.8T (1999298887680 bytes)
>         Used             1.5T (80.52%)
>         Reserved         256M (0.01%)
>         Free             363G (19.47%)
> PFS information
>           PFS#  Mode    Snaps
>              0  MASTER      0 (root PFS)
>              1  SLAVE       3
> 
> Is that something I should be worried about too ? As far as I can tell the replication of new files from master to slave works great, I can see the new files on the slave PFS fairly quickly.

Note that HAMMER1 mirroring does not operate on the block level, but on
the logical level. The master and slave PFSs are separate file systems,
and their underlying disk blocks are organized differently, so the used
space will not match exactly.

If you have different history retention policies on the master and the
slave, this can also lead to different file system sizes. This is not an
issue. Just adjust the policies using "hammer viconfig" and run "hammer
cleanup".
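
If you want to see what each side actually retains, you can compare the
snapshot lists directly (again assuming /HAMMER1_2TB is the master mount
point):

    hammer snapls /HAMMER1_2TB
    hammer snapls /HAMMER_SLAVE/pfs/hanma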

Best regards,

  Michael



