<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"><html>
<head>
<meta name="Generator" content="Zarafa WebAccess v7.1.4-41394">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<title>A few questions about HAMMER master/slave PFS</title>
<style type="text/css">
body
{
font-family: Arial, Verdana, Sans-Serif ! important;
font-size: 12px;
padding: 5px 5px 5px 5px;
margin: 0px;
border-style: none;
background-color: #ffffff;
}
p, ul, li
{
margin-top: 0px;
margin-bottom: 0px;
}
</style>
</head>
<body>
<p>Hello DFlyers,</p>
<p>I am running DragonFly 5.2.2 as an NFS server with two 2 TB LUKS-backed HDDs, each formatted with HAMMER1 (filesystem version 7), in a PFS master/slave mirror-stream setup, and it has been working great so far :)</p>
<p>The setup looks like this:</p>
<pre>
Disk1 -> LUKS -> HAMMER1_2TB  -> PFS# 0 (root)
Disk2 -> LUKS -> HAMMER_SLAVE -> PFS# 0 (root) + PFS# 1 (slave of HAMMER1_2TB PFS# 0)
</pre>
<p>Now that I have been using the system for a little while, I have a few questions about its behavior.</p>
<p>1) I realized that the HAMMER slave PFS has several snapshots (not created by me) that seem impossible to remove, e.g.:</p>
<pre>
Hikaeme# hammer info /HAMMER_SLAVE
Volume identification
    Label               hammer1_secure_slave
    No. Volumes         1
    HAMMER Volumes      /dev/mapper/knox2
    Root Volume         /dev/mapper/knox2
    FSID                0198767f-7139-11e8-9608-6d626d258b95
    HAMMER Version      7
Big-block information
    Total          238335
    Used           192009 (80.56%)
    Reserved           32 (0.01%)
    Free            46294 (19.42%)
Space information
    No. Inodes      35668
    Total size       1.8T (1999298887680 bytes)
    Used             1.5T (80.56%)
    Reserved         256M (0.01%)
    Free             362G (19.42%)
PFS information
    PFS#  Mode    Snaps
       0  MASTER      0 (root PFS)
       1  SLAVE       3

Hikaeme# hammer snapls /HAMMER_SLAVE/pfs/hanma
Snapshots on /HAMMER_SLAVE/pfs/hanma    PFS#1
Transaction ID      Timestamp                   Note
0x00000001034045c0  2018-07-04 18:19:42 CEST    -
0x00000001034406c0  2018-07-09 19:28:04 CEST    -
0x000000010383bc30  2018-08-12 10:51:07 CEST    -

Hikaeme# hammer snaprm 0x00000001034045c0
hammer: hammer snaprm 0x00000001034045c0: Operation not supported
</pre>
<p>My question here: should I worry about this, or is it intended behavior?</p>
<p>2) When running hammer info and comparing the used space of the master and slave filesystems, I see quite a big difference (194636 - 191903 = 2733 big-blocks of 8 MB, i.e. roughly 22 GB), even after running hammer cleanup:</p>
<pre>
Hikaeme# hammer info
Volume identification
    Label               HAMMER1_2TB
    No. Volumes         1
    HAMMER Volumes      /dev/mapper/knox
    Root Volume         /dev/mapper/knox
    FSID                81e9d5eb-6be7-11e8-802d-6d626d258b95
    HAMMER Version      7
Big-block information
    Total          238335
    Used           194636 (81.66%)
    Reserved           32 (0.01%)
    Free            43667 (18.32%)
Space information
    No. Inodes      35665
    Total size       1.8T (1999298887680 bytes)
    Used             1.5T (81.66%)
    Reserved         256M (0.01%)
    Free             341G (18.32%)
PFS information
    PFS#  Mode    Snaps
       0  MASTER      0 (root PFS)

Volume identification
    Label               hammer1_secure_slave
    No. Volumes         1
    HAMMER Volumes      /dev/mapper/knox2
    Root Volume         /dev/mapper/knox2
    FSID                0198767f-7139-11e8-9608-6d626d258b95
    HAMMER Version      7
Big-block information
    Total          238335
    Used           191903 (80.52%)
    Reserved           32 (0.01%)
    Free            46400 (19.47%)
Space information
    No. Inodes      35668
    Total size       1.8T (1999298887680 bytes)
    Used             1.5T (80.52%)
    Reserved         256M (0.01%)
    Free             363G (19.47%)
PFS information
    PFS#  Mode    Snaps
       0  MASTER      0 (root PFS)
       1  SLAVE       3
</pre>
<p>Is that something I should be worried about too? As far as I can tell, the replication of new files from the master to the slave works great; I can see new files on the slave PFS fairly quickly.</p>
<p>Wishing you all a good day,</p>
<p>Laurent</p>
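<p>P.S. For context, here is a minimal sketch of how a master/slave mirror-stream pair like the one above is typically set up with hammer(8). The paths match my layout, but the shared-uuid value is just a placeholder and the exact invocations may not match what I originally ran:</p>
<pre>
# The master is the root PFS (PFS# 0) of HAMMER1_2TB; read its shared-uuid.
hammer pfs-status /HAMMER1_2TB

# Create the slave PFS on the second filesystem with the same shared-uuid
# (the uuid below is a placeholder, not my real one).
hammer pfs-slave /HAMMER_SLAVE/pfs/hanma shared-uuid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Initial bulk transfer, then continuous streaming of new transactions.
hammer mirror-copy   /HAMMER1_2TB /HAMMER_SLAVE/pfs/hanma
hammer mirror-stream /HAMMER1_2TB /HAMMER_SLAVE/pfs/hanma
</pre>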
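<p>P.P.S. Regarding question 2, these are roughly the commands behind the comparison above, plus hammer config to show each PFS's snapshot/prune retention policy (HAMMER v3 and later keeps it in filesystem meta-data). I am assuming that policy is relevant to the size difference, but I have not confirmed it:</p>
<pre>
# Per-PFS snapshot/prune/reblock policy (the slave is addressed via its softlink).
hammer config /HAMMER1_2TB
hammer config /HAMMER_SLAVE/pfs/hanma

# Regular maintenance on both filesystems, then compare space usage again.
hammer cleanup /HAMMER1_2TB /HAMMER_SLAVE
hammer info /HAMMER1_2TB /HAMMER_SLAVE
</pre>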
</body>
</html>