[issue1583] panic: assertion: cursor->trans->sync_lock_refs > 0 in hammer_recover_cursor
Joe "Floid" Kanowitz (via DragonFly issue tracker)
sinknull at leaf.dragonflybsd.org
Sat Jan 30 10:11:38 PST 2010
Joe "Floid" Kanowitz <jkanowitz at snet.net> added the comment:
Patch applied to 2.4; no ill effects noted. With the PFSes mostly
synchronized I didn't have enough data left to sync to test for the same
failure with multiple `mirror-copy`s running, but I'll be setting up parallel
mirror-streams shortly, as my setup requires. (Is there a convenient way to
destroy a PFS and make sure it gets recreated with the same PFS number?)
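For reference, the recycle workflow I'd try looks roughly like the following. Whether the freed PFS number actually gets reused is an assumption on my part (HAMMER seems to hand out the lowest free id, so it should hold if nothing else creates a PFS in between); the paths and the shared-uuid value are placeholders, so treat this as a sketch, not a recipe:

```shell
# Destroy the old slave PFS (prompts for confirmation; its PFS id is freed).
hammer pfs-destroy /Mirror/pfs/DATA

# Recreate the slave. shared-uuid must match the master's for mirroring
# to work. Assumption: with no intervening PFS creation, the lowest free
# PFS number -- the one just destroyed -- is the one reassigned.
hammer pfs-slave /Mirror/pfs/DATA shared-uuid=<master-shared-uuid>

# Re-run the mirror: mirror-copy does a one-shot pass,
# mirror-stream keeps running continuously.
hammer mirror-copy /DATA/ /Mirror/pfs/DATA
```

`hammer pfs-status /Mirror/pfs/DATA` afterwards should confirm which PFS number the slave landed on.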
I briefly thought the low I/O figures on da1 (mirroring from da0 to da1) might
have been a new quirk:
# hammer mirror-copy /DATA/ /Mirror/pfs/DATA
histogram range 0000000107e975cf - 000000010d6514f0
Mirror-read: Mirror from 0000000000000002 to 000000010d6514f0
Mirror-read /DATA/ succeeded
# iostat -w1
      tty             ad0             da0             da1             cpu
 tin tout  KB/t  tps  MB/s  KB/t  tps  MB/s  KB/t  tps  MB/s  us ni sy in id
   2  465  0.00    0  0.00  0.00    0  0.00  0.00    0  0.00   4  0 10  1 85
   0   80  0.00    0  0.00 64.00  643 40.21 64.00    4  0.25  21  0 42  2 35
   0   80  0.00    0  0.00 63.82  683 42.59 60.00    4  0.23  28  0 37  3 32
   0   80  0.00    0  0.00 63.79  614 38.26 64.00    6  0.37  20  0 38  1 41
...but it completed in a reasonable time and all the data appears to be there,
so I'm not sure whether HAMMER was simply smart enough to skip over what
already existed from a previous interrupted copy. I don't remember whether the
pre-patch stats looked any different for the destination disk.
[SATA disks on mpt0, write caches enabled.]
_____________________________________________________
DragonFly issue tracker <bugs at lists.dragonflybsd.org>
<http://bugs.dragonflybsd.org/issue1583>
_____________________________________________________