panic: assertion: layer2->zone == zone in hammer_blockmap_free

YONETANI Tomokazu qhwt+dfly at
Sat Aug 2 19:53:56 PDT 2008

On Sat, Aug 02, 2008 at 12:24:39PM -0700, Matthew Dillon wrote:
> :Hi.
> :Caught this panic on latest DragonFly_RELEASE_2_0 (minus the fix to
> :vfs_subr.c, but the machine has less than 2G RAM), when I tried to
> :destroy a PFS slave on HAMMER filesystem.  The PFS hasn't been actively
> :..
> :
> :The kernel dump has been uploaded as ~y0netan1/crash/{kernel,vmcore}.13
> :on my leaf account.
> :
> :Cheers.

Matt, you sent me two other messages privately, but I think this message
covers what you asked in them.  apollo doesn't like my IP address, so I'd
need to configure my mail to go through my ISP's server to reach it (and
I haven't done that yet).

$ ls -l /HAMMER
total 0
lrwxr-xr-x  1 root    wheel   26 Jul 19 15:09 obj -> @@0xffffffffffffffff:00001
lrwxr-xr-x  1 root    wheel   26 Jul 19 16:24 slave -> @@0xffffffffffffffff:00002
drwxr-xr-x  1 source  source   0 Jul 24 11:48 source

/HAMMER is the only HAMMER filesystem on this machine and is mounted without
nohistory flags.
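
For context, the `obj' and `slave' entries above are HAMMER pseudo-filesystems
(PFSs); the @@-symlink targets encode the PFS transaction id and number.  A
layout like this would have been created with something along these lines (a
sketch only -- the uuid is a placeholder, and the report doesn't say which
master `slave' was slaved to):

```shell
# Illustrative only; paths match the listing above, <uuid> is a placeholder.
hammer pfs-master /HAMMER/obj                      # PFS #00001, a master
hammer pfs-slave /HAMMER/slave shared-uuid=<uuid>  # PFS #00002, a slave
```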

`obj' is used as the objdir for build{kernel,world} via nullfs and has the
nouhistory flag set (it used to have noshistory as well, but setting
noshistory confused a few commands running as non-root, so I cleared that
flag later).  `source' is a plain directory on the HAMMER filesystem and
contains a NetBSD CVS repository, a pkgsrc git tree cpdup'd from a UFS
filesystem (unused after the cpdup), and a pkgsrc git tree updated once a
day using the togit script (from the `fromcvs' package).
The filesystem has been through two major crashes so far: the first was
triggered by an attempted cross-device link in the middle of July, the
other by network code (a reused socket on connect).  According to
/var/log/messages, recovery ran only once, though:

  Jul 19 11:34:52 firebolt kernel: HAMMER(HAMMER) Start Recovery 30000000002c7350 - 30000000002c93f0 (8352 bytes of UNDO)(RW)
  Jul 19 11:34:53 firebolt kernel: HAMMER(HAMMER) End Recovery

>     Also:
>     It looks like a 1TB disk, mostly empty, with ~500K inodes.
>     * Did you ever fill it up?  (I'm guessing you haven't)


>     * What types of operations did you run on it other then as a mirroring
>       slave?  e.g. cpdup, tar, buildworlds, etc?

The `slave' was a plain pfs-slave, so I hadn't done anything to it before
this panic.

>     * Did you upgrade it to a master before getting the panic or after?


>     * How many PFS operations did you run on the filesystem approximately?  

Somewhere between several and twenty, I'd guess.  Mostly creating and
destroying PFSs, without an actual mirror-copy or other file operations.

>     * How much reblocking have you done overall?

Several times.

>     * You said you were playing with mirroring, with it as a slave.  Did
>       you mirror-copy or mirror-write into it?  If so, how much data was
>       transfered?

I use mirror-copy to sync the slave.  ${.OBJDIR} for buildworld usually
grows to about 2 GB, and the ${WRKDIR}s for pkgsrc can reach around 1 GB
when I build a meta-package.  I usually run mirror-copy after a buildworld
or after building packages, remove the directories on the master, then
mirror-copy again to see whether the removed files and directories are
properly propagated to the slave.
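
That cycle roughly corresponds to the following (a sketch -- the master
path and directory name are assumptions for illustration, not taken from
the report):

```shell
# Hypothetical paths; the report doesn't name the exact master or objdir.
hammer mirror-copy /HAMMER/obj /HAMMER/slave   # sync the slave after a build
rm -rf /HAMMER/obj/world-objdir                # remove directories on the master
hammer mirror-copy /HAMMER/obj /HAMMER/slave   # check the deletions propagate
```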

I remember interrupting a reblock on /HAMMER/obj, but I haven't run
mirror-copy to the slave since then, so I don't think that has anything
to do with this.

>     * Are the ~500K inodes mostly associated with the slave or unrelated?

They are mostly associated with /HAMMER/source and /HAMMER/obj.
