sshguard and hammer
Predrag Punosevac
punosevac72 at gmail.com
Thu Jun 26 20:45:49 PDT 2014
I was attempting to use sshguard in the same fashion I do on OpenBSD, by adding
table <sshguard> persist
block in quick on egress proto tcp from <sshguard> \
to any port ssh label "ssh bruteforce"
into pf.conf and starting the sshguard daemon by putting
echo 'sshguard_enable="YES"' >> /etc/rc.conf.local
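For completeness, here is a quick sketch of how that table can be inspected from the shell; these are just the standard pfctl table commands, using the table name from the snippet above:

# assuming pf is enabled and the ruleset above has been loaded
pfctl -f /etc/pf.conf          # reload the ruleset
pfctl -sr | grep bruteforce    # confirm the labelled block rule is in place
pfctl -t sshguard -T show      # list the addresses sshguard has blocked so far
pfctl -t sshguard -T flush     # empty the table, e.g. if I block myself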
It turns out I almost killed the server by doing this. Namely, I had
/var/log/messages full of the following:
backup1# tail messages
Jun 26 23:24:05 backup1 sshguard[755]: Reloading rotated file /var/log/auth.log.
Jun 26 23:24:05 backup1 sshguard[755]: Reloading rotated file /var/log/maillog.
Jun 26 23:24:05 backup1 sshguard[755]: Reloading rotated file /var/log/auth.log.
Jun 26 23:24:05 backup1 sshguard[755]: Reloading rotated file /var/log/auth.log.
Jun 26 23:24:05 backup1 sshguard[755]: Reloading rotated file /var/log/maillog.
Jun 26 23:24:05 backup1 sshguard[755]: Reloading rotated file /var/log/auth.log.
and similarly in /var/log/auth.log:
Jun 26 23:22:00 backup1 sshguard[755]: Reloading rotated file /var/log/auth.log.
Jun 26 23:22:00 backup1 sshguard[755]: Reloading rotated file /var/log/maillog.
Jun 26 23:22:00 backup1 sshguard[755]: Reloading rotated file /var/log/auth.log.
Jun 26 23:22:00 backup1 sshguard[755]: Reloading rotated file /var/log/maillog.
Jun 26 23:22:00 backup1 sshguard[755]: Reloading rotated file /var/log/auth.log
Those log files quickly grew to about 1.7 GB in size, if I can trust du with
HAMMER (I learned the hard way that it doesn't work with ZFS), but due to the
frequency of the file changes df reports far more space in use:
backup1# df -h
Filesystem        Size   Used  Avail Capacity  Mounted on
ROOT              298G    58G   240G    19%    /
devfs             1.0K   1.0K     0B   100%    /dev
/dev/da0s1a       756M   112M   584M    16%    /boot
/pfs/@@-1:00001   298G    58G   240G    19%    /var
/pfs/@@-1:00002   298G    58G   240G    19%    /tmp
/pfs/@@-1:00003   298G    58G   240G    19%    /usr
/pfs/@@-1:00004   298G    58G   240G    19%    /home
/pfs/@@-1:00005   298G    58G   240G    19%    /usr/obj
/pfs/@@-1:00006   298G    58G   240G    19%    /var/crash
/pfs/@@-1:00007   298G    58G   240G    19%    /var/tmp
procfs            4.0K   4.0K     0B   100%    /proc
ROOT              3.3T    44G   3.2T     1%    /data
Obviously I have not really used 58G of data. How do I "scrub" and clean the
history, and, more importantly, what would be a good way to prevent this kind
of disaster? I can live without sshguard, but I am really terrified of filling
up the disk.
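The closest I have come to an answer from reading hammer(8) is something like the sketch below, but I have not verified any of it on this machine, so please correct me; /var here is the PFS shown in the df output above, and prune-everything is the aggressive variant that throws away all fine-grained history:

# unverified sketch, based only on my reading of hammer(8)
hammer info                      # overall space usage of the HAMMER volumes
hammer snapls /var               # snapshots currently holding history under /var
hammer cleanup /var              # run the configured snapshot/prune/reblock pass now
hammer prune-everything /var     # aggressive: drop all fine-grained history on that PFS
chflags -R nohistory /var/log    # stop recording new history for the log files

If prune-everything is overkill, I assume tuning the prune and snapshot lines via hammer viconfig and letting the regular hammer cleanup take care of it would be the gentler route.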
Most Kind Regards,
Predrag