hammer-inodes: malloc limit exceeded

Freddie Cash fjwcash at gmail.com
Sun Aug 31 16:54:55 PDT 2008


On Sun, Aug 31, 2008 at 3:12 PM, Matthew Dillon
<dillon at apollo.backplane.com> wrote:
>    You don't need to use the hardlink trick if backing up to a HAMMER
>    filesystem.  I still need to write utility support to streamline
>    the user interface but basically all you have to do is use rdist, rsync,
>    or cpdup (without the hardlink trick) to overwrite the same destination
>    directory on the HAMMER backup system, then generate a snapshot
>    softlink.  Repeat each day.
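
Spelled out, that daily cycle would look something like the following
(hostname and paths made up; since the streamlined utility support
isn't written yet, this assumes the snapshot softlink is built by hand
from the transaction ID that hammer(8)'s synctid directive prints):

    # overwrite the same destination directory each day -- no hardlink trick
    rsync -a --delete server1:/ /backup/server1/

    # grab the current transaction ID and publish it as a snapshot softlink
    tid=$(hammer synctid /backup)
    ln -s /backup/server1@@$tid /backup/snapshots/server1-$(date +%Y%m%d)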

In-filesystem snapshot support is such a handy tool.  It's something
that I really miss on our Linux systems (LVM snapshots are separate
volumes: you have to guesstimate how much room each one will use, and
you have to leave empty space in the volume group to hold them).
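
For comparison, the LVM version (names made up) makes you size the
copy-on-write area up front, and the snapshot is silently invalidated
if more than that amount of data changes underneath it:

    # reserve 20 GB of the volume group for the snapshot's changed blocks;
    # guess too low and LVM drops the snapshot once the reservation fills
    lvcreate --snapshot --size 20G --name storage-snap /dev/vg0/storage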

We use a similar setup for our remote backups box at work.  It's a 2x
dual-core Opteron system with 8 GB of RAM, 12x 400 GB SATA HDs on one
3Ware controller, and 12x 500 GB SATA HDs on a second 3Ware
controller (all configured as Single Disks), running FreeBSD 7-STABLE
off a pair of 2 GB CompactFlash cards (gmirror'd).  / is on the CF;
everything else (/usr, /usr/ports, /usr/local, /usr/ports/distfiles,
/usr/src, /usr/obj, /home, /tmp, /var, /storage) is on ZFS
filesystems, with the 24 drives configured as a single raidz2 pool.
There's a quad-port Intel Pro/1000 gigabit NIC configured via lagg(4)
as a single load-balancing interface.
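
The pool and NIC setup boil down to something like this (device names
are illustrative, a sketch rather than a transcript of the real
commands):

    # all 24 drives in a single raidz2 vdev
    zpool create storage raidz2 \
        da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 \
        da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23

    # bundle the four em(4) ports into one load-balancing interface
    ifconfig lagg0 create
    ifconfig lagg0 laggproto loadbalance \
        laggport em0 laggport em1 laggport em2 laggport em3 \
        192.168.0.10 netmask 255.255.255.0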

Every night, a cronjob creates a snapshot of /storage; then the
server connects to each remote server over SSH and runs rsync to copy
its entire harddrive into a directory under /storage.  For 37
servers, the rsync runs take just under 2 hours (the initial rsync
can take upwards of 12 hours per server, depending on the amount of
data that needs to be transferred).  A normal snapshot uses <4 GB.
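
Stripped down, the nightly job is along these lines (the host list
and paths are placeholders):

    #!/bin/sh
    # keep yesterday's state as a dated ZFS snapshot, then refresh the copies
    zfs snapshot storage@$(date +%Y-%m-%d)
    for host in $(cat /usr/local/etc/backup-hosts); do
        rsync -a --delete -e ssh root@${host}:/ /storage/${host}/
    done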
