A few questions about HAMMER master/slave PFS
Laurent Vivier
laurent at lamzi.com
Mon Aug 27 09:08:10 PDT 2018
Hello DFlyers,
I am running DragonFly 5.2.2 as an NFS server on two 2 TB LUKS-backed HDDs, with HAMMER1 version 7 as the filesystem in a PFS master/slave mirror-stream setup, and it has been working great so far :)
The setup looks like this:
Disk1 -> LUKS -> HAMMER1_2TB  -> PFS# 0 (root)
Disk2 -> LUKS -> HAMMER_SLAVE -> PFS# 0 (root) + PFS# 1 (slave to HAMMER1_2TB / PFS# 0)
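For completeness, the slave PFS was created and is kept in sync roughly like this (a sketch from memory; the master mount path and the shared-uuid value below are placeholders, not the exact ones I used):

Hikaeme# hammer pfs-slave /HAMMER_SLAVE/pfs/hanma shared-uuid=<master-PFS-uuid>
Hikaeme# hammer mirror-stream /HAMMER1_2TB /HAMMER_SLAVE/pfs/hanma

with the mirror-stream process left running so that changes propagate continuously.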
Now that I have been using the system for a little while, I have a few questions about its behavior:

1) I noticed that the slave PFS has several snapshots (not created by me) that seem impossible to remove, e.g.:
Hikaeme# hammer info /HAMMER_SLAVE
Volume identification
        Label               hammer1_secure_slave
        No. Volumes         1
        HAMMER Volumes      /dev/mapper/knox2
        Root Volume         /dev/mapper/knox2
        FSID                0198767f-7139-11e8-9608-6d626d258b95
        HAMMER Version      7
Big-block information
        Total          238335
        Used           192009 (80.56%)
        Reserved           32 (0.01%)
        Free            46294 (19.42%)
Space information
        No. Inodes      35668
        Total size       1.8T (1999298887680 bytes)
        Used             1.5T (80.56%)
        Reserved         256M (0.01%)
        Free             362G (19.42%)
PFS information
        PFS#  Mode    Snaps
           0  MASTER      0 (root PFS)
           1  SLAVE       3
Hikaeme# hammer snapls /HAMMER_SLAVE/pfs/hanma
Snapshots on /HAMMER_SLAVE/pfs/hanma    PFS#1
Transaction ID          Timestamp                       Note
0x00000001034045c0      2018-07-04 18:19:42 CEST        -
0x00000001034406c0      2018-07-09 19:28:04 CEST        -
0x000000010383bc30      2018-08-12 10:51:07 CEST        -
Hikaeme# hammer snaprm 0x00000001034045c0
hammer: hammer snaprm 0x00000001034045c0: Operation not supported
My question here is: should I worry about this, or is it intended behavior?
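One thing I was unsure of: hammer(8) seems to allow snaprm to take a transaction id together with an explicit filesystem path, and snapshot retention on a PFS is normally handled by hammer cleanup according to the PFS config. So maybe one of the following is the intended route, though I have not verified either and may be misreading the man page:

Hikaeme# hammer snaprm 0x00000001034045c0 /HAMMER_SLAVE/pfs/hanma
Hikaeme# hammer viconfig /HAMMER_SLAVE/pfs/hanma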
2) When running hammer info and comparing used space between the master and the slave volume, I see quite a big difference (about 22 GB, even after running hammer cleanup):
Hikaeme# hammer info
Volume identification
        Label               HAMMER1_2TB
        No. Volumes         1
        HAMMER Volumes      /dev/mapper/knox
        Root Volume         /dev/mapper/knox
        FSID                81e9d5eb-6be7-11e8-802d-6d626d258b95
        HAMMER Version      7
Big-block information
        Total          238335
        Used           194636 (81.66%)
        Reserved           32 (0.01%)
        Free            43667 (18.32%)
Space information
        No. Inodes      35665
        Total size       1.8T (1999298887680 bytes)
        Used             1.5T (81.66%)
        Reserved         256M (0.01%)
        Free             341G (18.32%)
PFS information
        PFS#  Mode    Snaps
           0  MASTER      0 (root PFS)

Volume identification
        Label               hammer1_secure_slave
        No. Volumes         1
        HAMMER Volumes      /dev/mapper/knox2
        Root Volume         /dev/mapper/knox2
        FSID                0198767f-7139-11e8-9608-6d626d258b95
        HAMMER Version      7
Big-block information
        Total          238335
        Used           191903 (80.52%)
        Reserved           32 (0.01%)
        Free            46400 (19.47%)
Space information
        No. Inodes      35668
        Total size       1.8T (1999298887680 bytes)
        Used             1.5T (80.52%)
        Reserved         256M (0.01%)
        Free             363G (19.47%)
PFS information
        PFS#  Mode    Snaps
           0  MASTER      0 (root PFS)
           1  SLAVE       3
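For what it's worth, the difference matches the big-block counts above: HAMMER uses 8 MiB big-blocks, and 194636 - 191903 = 2733 big-blocks, i.e. roughly 2733 × 8 MiB ≈ 22 GB. The cleanup pass mentioned above was run on both sides along these lines (the master mount path is a placeholder, since hammer info does not print mount points):

Hikaeme# hammer cleanup /HAMMER1_2TB /HAMMER_SLAVE/pfs/hanma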
Is that something I should be worried about too? As far as I can tell, replication of new files from master to slave works great; I can see new files appear on the slave PFS fairly quickly.
Wishing you all a good day,
Laurent