slave PFS best practices

Predrag Punosevac punosevac72 at gmail.com
Sun Mar 13 09:07:19 PDT 2016


I was wondering if seasoned HAMMER users could let me know if I am doing
something outrageously stupid.

I have configured a pair of 3 TB SATA HDDs as my storage devices, as
shown in my /etc/fstab:


/dev/serno/WD-WCAWZ2111282.s1a  /data   hammer  rw	2	2
/dev/serno/WD-WCAWZ2969584.s1a  /backup hammer  rw      2	2

/data/pfs/backups /data/backups         null    rw      0       0

and created a MASTER PFS, /data/pfs/backups, which will receive rsync
backups from my various house devices and is null-mounted as seen above.
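
For reference, the creation itself was roughly this (the null mount is
just the fstab line above):

dfly# hammer pfs-master /data/pfs/backups
dfly# mount_null /data/pfs/backups /data/backups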

dfly# hammer pfs-status /data/backups
/data/backups   PFS #1 {
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x0000000100028430
    shared-uuid=a22e2639-e8e7-11e5-90ee-b9aeed3cce35
    unique-uuid=a22e268b-e8e7-11e5-90ee-b9aeed3cce35
    label=""
    snapshots="/data/hammer/backups"
    prune-min=00:00:00
    operating as a MASTER
}


I changed the default location of snapshots from /var/hammer/<pfs>, which
is on a 32 GB SSD, to a directory on the drive itself. Is this out of line?
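
For the record, that change boils down to a pfs-update call, something
like:

dfly# hammer pfs-update /data/backups snapshots=/data/hammer/backups
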
I also adjusted a few parameters

dfly# hammer viconfig /data/backups
snapshots 1d 60d
prune     1d 50m
rebalance 1d 50m
#dedup     1d 50m
reblock   1d 50m
recopy    30d 100m

as I am using an embedded Celeron motherboard for this project, which
doesn't have much muscle. I was not goofing off at this time with

/etc/defaults/periodic.conf

and just left the defaults.
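
As far as I understand, those viconfig intervals are consumed by hammer
cleanup, which is what the nightly periodic script runs; kicking it off by
hand to see how the box copes should be just:

dfly# hammer cleanup /data/backups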

I created the corresponding SLAVE PFS on the second drive, which is
mounted as /backup, but did not mount the PFS itself:
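
For reference, I created it by pointing pfs-slave at the master's
shared-uuid, something like:

dfly# hammer pfs-slave /backup/backups \
          shared-uuid=a22e2639-e8e7-11e5-90ee-b9aeed3cce35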

dfly# hammer info /backup
Volume identification
        Label               BACKUP
        No. Volumes         1
        FSID                ce6f126e-e8e5-11e5-bd5c-b9aeed3cce35
        HAMMER Version      6
Big-block information
        Total          357432
        Used                3 (0.00%)
        Reserved           45 (0.01%)
        Free           357384 (99.99%)
Space information
        No. Inodes          9
        Total size       2.7T (2998356934656 bytes)
        Used              24M (0.00%)
        Reserved         360M (0.01%)
        Free             2.7T (99.99%)
PFS information
        PFS ID  Mode    Snaps  Mounted on
             0  MASTER      1  /backup
             1  SLAVE       1  not mounted

I adjusted the snapshot directory for the slave as well:

dfly# hammer pfs-status /backup/backups
      
/backup/backups PFS #1 {
    sync-beg-tid=0x0000000000000001
    sync-end-tid=0x0000000100028430
    shared-uuid=a22e2639-e8e7-11e5-90ee-b9aeed3cce35
    unique-uuid=109e7319-e8eb-11e5-90ee-b9aeed3cce35
    label=""
    snapshots="/backup/hammer/backups"
    prune-min=00:00:00
    operating as a SLAVE
}

However, I have not adjusted

hammer viconfig /backup/backups
# No configuration present, here are some defaults
# you can uncomment.  Also remove these instructions
#
#snapshots 1d 60d
#prune     1d 5m
#rebalance 1d 5m
#dedup     1d 5m
#reblock   1d 5m
#recopy    30d 10m


as my understanding is that the 160.clean-hammer periodic script only
deals with mounted PFSes:

# 160.clean-hammer
daily_clean_hammer_enable="YES"        # HAMMER maintenance
daily_clean_hammer_verbose="NO"
daily_clean_hammer_pfslist=""          # default: mounted pfs
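
If I later decide the slave should get its own maintenance, my reading of
the script is that PFS paths can be listed there explicitly, e.g. (a guess
on my part, not something I have tested):

daily_clean_hammer_pfslist="/data/backups /backup/backups"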


I am really not sure about the last two steps. While not mounting the
slave PFS seems logical, since the mirror copy is read-only anyway, I am
not sure whether I should edit hammer viconfig /backup/backups. On one
hand that seems logical considering data integrity, but on the other
hand, if the daily script is not performing maintenance on it, why
bother? How do I ensure that 3 months down the line, when the first HDD
hosting the master PFS dies, its slave mirror on the second HDD is in
good shape and can be promoted to master?
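
For context, the failover path I have in mind is a periodic mirror-copy
from the master into the slave, plus a pfs-upgrade when the /data drive
eventually dies, roughly:

# keep the slave current (could go into root's crontab once trusted)
dfly# hammer mirror-copy /data/backups /backup/backups

# when /data dies: promote the slave so it becomes a writable master
dfly# hammer pfs-upgrade /backup/backups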


Best,
Predrag


