hammer-inodes: malloc limit exceeded
Matthew Dillon
dillon at apollo.backplane.com
Sun Aug 31 15:13:37 PDT 2008
:This is 2.0 + patches (current DragonFly_RELEASE_2_0_Slip)
:
:...
:> And tell me what it says. If the value is greater than 70000, set it
:> to 70000.
:
:There is no such sysctl. I used kern.maxvnodes instead; the original value
:was 129055.
:
:--
:Francois Tigeot
Yah, I mistyped that, it's kern.maxvnodes. The fix I had made was
MFC'd to 2.0_Slip, so the calculation must still be off. Reducing
maxvnodes should solve the panic. The basic problem is that HAMMER's
struct hammer_inode is larger than struct vnode, so the vnode limit
calculations wind up being off.
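For reference, checking and lowering the limit by hand looks roughly like
this (70000 is just the value suggested above, not a magic number):

    # show the current vnode limit
    sysctl kern.maxvnodes

    # lower it; takes effect immediately
    sysctl kern.maxvnodes=70000

    # keep the lower limit across reboots
    echo 'kern.maxvnodes=70000' >> /etc/sysctl.conf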
You don't need to use the hardlink trick if backing up to a HAMMER
filesystem. I still need to write utility support to streamline
the user interface, but basically all you have to do is use rdist, rsync,
or cpdup (without the hardlink trick) to overwrite the same destination
directory on the HAMMER backup system, then generate a snapshot
softlink. Repeat each day.
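Very roughly, one daily cycle could look like the sketch below; the host
name, paths, and the use of hammer synctid to obtain the transaction id for
the softlink are just illustrations, not the final user interface:

    #!/bin/sh
    # Hypothetical daily backup of one host onto a HAMMER filesystem.
    SRC=/nfs/pkgbox                 # NFS-mounted source (illustrative path)
    DST=/backup/mirrors/pkgbox      # destination on the HAMMER fs (illustrative)

    # Overwrite the same destination every day; no hardlink trick needed.
    cpdup $SRC $DST

    # Grab the current transaction id and point a dated softlink at it.
    TID=`hammer synctid /backup`
    ln -s "pkgbox@@${TID}" /backup/mirrors/pkgbox.`date +%Y%m%d`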
This is how I backup DragonFly systems. I have all the systems
NFS-exported to the backup system and it uses cpdup and the hammer
snapshot feature to create a softlink for each day.
backup# df -g -i /backup
Filesystem  1G-blocks  Used  Avail  Capacity    iused  ifree  %iused  Mounted on
TEST              696   281    414       40%  3605109      0    100%  /backup
backup# cd /backup/mirrors
backup# ls -la
...
drwxr-xr-x 1 root wheel 0 Aug 31 03:20 pkgbox
lrwxr-xr-x 1 root wheel 26 Jul 14 22:22 pkgbox.20080714 -> pkgbox@@0x00000001061a92cd
lrwxr-xr-x 1 root wheel 26 Jul 16 01:58 pkgbox.20080716 -> pkgbox@@0x000000010c351e83
lrwxr-xr-x 1 root wheel 26 Jul 17 03:08 pkgbox.20080717 -> pkgbox@@0x000000010d9ee6ad
lrwxr-xr-x 1 root wheel 26 Jul 18 03:12 pkgbox.20080718 -> pkgbox@@0x000000010f78313d
lrwxr-xr-x 1 root wheel 26 Jul 19 03:25 pkgbox.20080719 -> pkgbox@@0x0000000112505014
...
Doing backups this way has some minor management issues, and we really
need an official user utility to address them. When the backup disk
gets over 90% full I will have to start deleting softlinks and running
hammer prune, and I run about 30 minutes' worth of hammer reblocking
ops every night from cron.
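As a rough sketch, that nightly maintenance could be driven from cron with
something like the following; the paths are illustrative and hammer(8)
should be checked for the exact prune/reblock arguments on a given release:

    #!/bin/sh
    # Hypothetical nightly HAMMER maintenance, run from cron.
    FS=/backup
    LINKDIR=/backup/mirrors

    # Prune history no longer referenced by the snapshot softlinks kept
    # under $LINKDIR (delete softlinks first to make their history prunable).
    hammer prune $LINKDIR

    # Reblock to defragment; -t limits the run to roughly 30 minutes so
    # the work can be spread across nights.
    hammer -t 1800 reblock $FS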
HAMMER locks down atime/mtime when accessed via a snapshot, so tar | md5
can be used to create a sanity check for each snapshot.
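For example, something along these lines (the snapshot name is taken from
the listing above):

    # Checksum the contents of one snapshot.  atime/mtime are frozen in the
    # snapshot view, so re-running this later should give the same digest.
    tar -cf - -C /backup/mirrors/pkgbox.20080714 . | md5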
By my estimation it is going to take at least another 200+ days of
daily backups before I get to that point on my /backup system. I
may speed it up by creating some filler files so I can write and test
a user utility to do the management.
--
Another way of doing backups is to use the mirroring feature. This only
works when both the source and target filesystems are HAMMER filesystems,
though, and the snapshot softlink would have to be created manually
(so more utility support is needed to make this easier to do from userland).
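A bare-bones sketch of that, assuming the hammer(8) mirror-read /
mirror-write directives and a suitably configured slave on the target
(host names and paths here are made up):

    # Stream the mirroring data from the master filesystem to the backup box.
    hammer mirror-read /master | ssh backup hammer mirror-write /backup/slave

    # The dated snapshot softlink on the target still has to be created by
    # hand, the same way as in the cpdup example above.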
-Matt
Matthew Dillon
<dillon at backplane.com>