panic: assertion: error == 0 in hammer_start_transaction

Rumko rumcic at gmail.com
Sat Jan 3 16:18:20 PST 2009


Simon 'corecode' Schubert wrote:

> Rumko wrote:
>> Matthew Dillon wrote:
>> 
>>> :...
>>> :few "RPC timeout for server 192.168.0.16" before the network starts
>>> :working and goes on booting, but if I run the vkernel under gdb, I only
>>> :get perhaps 2 of those messages and after that nothing happens - gets
>>> :stuck) which was semi-diskless (root on nfs, and one hammer fs partition
>>> :on the first vkd, but since it was stuck at the RPC timeout messages it
>>> :shouldn't have gotten far enough to mount the root, let alone the local
>>> :hammer partition - unless it started booting sometime during the night).
>>> :
>>> :The backtrace:
>>> :panic: assertion: error == 0 in hammer_start_transaction
>>> :mp_lock = 00000000; cpuid = 0
>>> :Trace beginning at frame 0xe28c9968
>>> :panic(e28c998c,c02c6806,e28c9a84,c39a6738,e28c99a8) at panic+0x14d
>>> :...
>>> :The dump is located at leaf:~rumko/crash/{kernel,vmcore}.0
>>> :
>>> :The kernel was compiled on the 2nd January around noon CET ... so the
>>> :sources should have been from around then as well.
>>> :--
>>> :Regards,
>>> :Rumko
>>>
>>>     Looking at the core the error code was 6, ENXIO, which implies
>>>     the underlying block device to the HAMMER filesystem went away.
>>>
>>>     It looks like a HAMMER mount on /mnt, backed by a VN device
>>>     (/dev/vn0s1a):
>>>
>>>     f_mntonname = "/mnt", '\0' <repeats 75 times>,
>>>     f_mntfromname = "VROOT", '\0' <repeats 74 times>,
>>>
>>>     vol_name = 0xe1c84080 "/dev/vn0s1a",
>>>
>>>     My guess is that your VN device is backed by a file over NFS
>>>     and NFS errored out.
>>>
>>> -Matt
>>> Matthew Dillon
>>> <dillon at backplane.com>
>> 
>> Ah damn. In that case never mind; I wonder what I was doing, hm.
> 
> Still shouldn't panic or something, no?
> 
> cheers
>    simon
> 

Well, it would be lovely if it didn't panic, but at least I have a faint idea
of what caused it and will be more careful in the future.
-- 
Regards,
Rumko
More information about the Bugs mailing list