It's really hard to say with something that is virtually hosted. It kinda
sounds like the host isn't assigning enough of its own CPUs to the guest.
The fact that DragonFly is complaining about smp_invltlb() implies that the
guest's virtualized cpu threads are not getting scheduled properly.

One thing to note is that we do not issue any instruction escapes to hint to
virtual hosts that a cpu is in a tight loop waiting for synchronization. It
would be nice if we had some support for that; it would probably make DFly
play better on virtualized systems.

I suggest setting the number of cores to 1. That will get rid of all SMP
interplay and hopefully remove the issues the virtual host is choking on.
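To illustrate the kind of instruction escape meant here: on x86 the PAUSE
instruction tells SMT siblings, and through KVM's pause-loop-exiting (PLE)
feature the hypervisor as well, that the CPU is busy-waiting, so the host can
deschedule the spinning vCPU instead of letting it burn its whole timeslice.
A minimal sketch, not DragonFly code; `done' stands in for whatever condition
the loop is synchronizing on:

    /*
     * Spin-wait with a PAUSE hint.  Illustrative sketch only, not
     * DragonFly code: `done' is a placeholder for whatever condition
     * the caller is waiting on.
     */
    static void
    spin_wait(volatile int *done)
    {
            while (*done == 0)
                    __asm__ __volatile__("pause" ::: "memory");
    }

For a self-managed QEMU/KVM guest, the single-core suggestion corresponds to
booting the guest with `-smp 1'; on a hosted platform the vCPU count is
normally set through the provider's control panel instead.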

-Matt

On Thu, May 26, 2016 at 5:43 AM, Stefan Unterweger
<232.20711@chiffre.aleturo.com> wrote:

Hello,

I have tried debugging this on my own, but I am out of ideas now. I
have a server running DragonFly BSD, not yet really in production
because it crashes semi-randomly every few weeks. The first few crashes
were just freezes: the machine would become utterly unresponsive and
would need a hard reboot.

Since a few crashes back, kernel traces have started to appear in syslog
moments before the server freezes. I never get an outright kernel panic
on the console; the machine just hangs until put out of its misery and
rebooted. I have included the last three traces below, as I have started
recording them to do some investigation of my own.

It -appears- to crash under heavy I/O load when paging is involved,
but this is not always the case (a frozen ‘top’ from moments before the
crash sometimes shows a huge swath of pageout activity, sometimes none
at all). The crashes are not reproducible, which makes them even harder
to debug (also, even though the server is not yet fully in production,
it already hosts a variety of things for which I cannot afford large
downtimes just to poke around).

It is not a physical machine, but runs in a virtual datacenter operated
by ProfitBricks, a German company; perhaps somebody has heard of them.
From what I know, they run some sort of KVM for their virtualisation.

Increasing the number of cores seemed to help insofar as I now quite
reliably get a kernel trace in syslog right before the machine dies.
Perhaps the one core senses the problem, cries out in panic and is then
brought down as the other one pulls it into its own endless loop, so
that they may perish together.


I am trying to set up another ‘real’ machine as a pseudo-server locally
in my office to have something to fiddle with, but so far I have not
managed to get anywhere near a crash, much less a similar one. So it
may well be that the virtualisation is to blame; the evidence is
inconclusive, and I cannot rule it out.

My -guess- is some sort of race condition that is triggered and sends
the server into an endless loop; the syslog lines starting with
‘endless loop’ support this at least superficially.

Interestingly, the traces look wildly different. At first I thought I
was running into some HAMMER bug or other; DragonFly is clearly marked
as ‘experimental’, so it's my own fault if the floor is covered with my
own blood from running bleeding-edge software. :o)

But from the traces, it rather looks like HAMMER is just an innocent
bystander. One of the traces was in some HAMMER function, another in
the middle of paging, the last one in who knows what. So, basically,
the server was busy minding its own business and crashed at random,
with the trace just recording that the server was doing I/O and light
paging, i.e. that it was just being a -server-.


As promised, here are the traces of the last three incidents, with the
most recent one on top. All of them are from /var/log/messages; I have
redacted the timestamps and such to help fit everything within one
screen-width. The console shows pretty much the same thing, but with
-no- panic; it just hangs in all three cases.


This was from an hour ago:
| Trace beginning at frame 0xffffffe07f458440
| smp_invltlb() at smp_invltlb+0x229 0xffffffff80a19cd5
| smp_invltlb() at smp_invltlb+0x229 0xffffffff80a19cd5
| pmap_qenter() at pmap_qenter+0x6d 0xffffffff80a129da
| allocbuf() at allocbuf+0x5eb 0xffffffff8065a4cb
| getblk() at getblk+0x484 0xffffffff8065d641
| hammer_io_new() at hammer_io_new+0x34 0xffffffff8080ef56
| hammer_load_buffer() at hammer_load_buffer+0x72 0xffffffff8081ab92
| hammer_get_buffer() at hammer_get_buffer+0x4b1 0xffffffff8081b114
| hammer_bnew() at hammer_bnew+0xa6 0xffffffff8081bde3
| hammer_generate_undo() at hammer_generate_undo+0x114 0xffffffff808242ee
| hammer_modify_buffer() at hammer_modify_buffer+0xb4 0xffffffff8080f3f3
| hammer_btree_do_propagation() at hammer_btree_do_propagation+0x17b 0xffffffff807feaae
| hammer_ip_sync_record_cursor() at hammer_ip_sync_record_cursor+0x5f7 0xffffffff80816ed8
| hammer_sync_record_callback() at hammer_sync_record_callback+0x24d 0xffffffff80808995
| hammer_rec_rb_tree_RB_SCAN() at hammer_rec_rb_tree_RB_SCAN+0xfa 0xffffffff80813cc0
| hammer_sync_inode() at hammer_sync_inode+0x27e 0xffffffff8080b0ad
| hammer_flusher_flush_inode() at hammer_flusher_flush_inode+0x55 0xffffffff808074bd
| hammer_fls_rb_tree_RB_SCAN() at hammer_fls_rb_tree_RB_SCAN+0xfc 0xffffffff8080662f
| hammer_flusher_slave_thread() at hammer_flusher_slave_thread+0x7a 0xffffffff80806763
| smp_invltlb: endless loop 00000000 00000002, rflags 0000000000000282 retrysmp_invltlb: ipi sent
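If I read that last line right, smp_invltlb() spins waiting for the other
CPU to acknowledge a TLB-invalidation IPI and complains when no answer
arrives. Roughly this pattern, as far as I can tell; this is my own
reconstruction for illustration, not DragonFly's actual code, and
send_invltlb_ipi(), LOOP_LIMIT, and the exact message format are invented
placeholders:

    #include <stdint.h>

    extern void send_invltlb_ipi(uint32_t mask);  /* invented helper */
    extern int kprintf(const char *fmt, ...);     /* approximated decl */
    #define LOOP_LIMIT 10000000                   /* invented threshold */

    /* Bitmask of CPUs that have not yet acknowledged the IPI. */
    static volatile uint32_t ack_mask;

    static void
    invltlb_wait(uint32_t other_cpus)
    {
            long spins = 0;

            ack_mask = other_cpus;
            send_invltlb_ipi(other_cpus);   /* targets clear their bit */

            while (ack_mask != 0) {
                    __asm__ __volatile__("pause" ::: "memory");
                    if (++spins == LOOP_LIMIT) {    /* probably stuck */
                            kprintf("smp_invltlb: endless loop %08x, retry\n",
                                ack_mask);
                            send_invltlb_ipi(ack_mask);  /* resend, keep waiting */
                            spins = 0;
                    }
            }
    }

If the hypervisor has simply descheduled the vCPU that is supposed to
answer, a loop like this spins until the host runs that vCPU again, which
would match the freezes rather than outright panics.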

This one is from a few weeks back. The server mysteriously did -not-
crash this time; it merely froze for half a minute or so and recovered:
| Trace beginning at frame 0xffffffe05c3136f0
| smp_invltlb() at smp_invltlb+0x229 0xffffffff80a19cd5
| smp_invltlb() at smp_invltlb+0x229 0xffffffff80a19cd5
| pmap_qenter() at pmap_qenter+0x6d 0xffffffff80a129da
| swap_pager_putpages() at swap_pager_putpages+0x453 0xffffffff8083836a
| vm_pageout_flush() at vm_pageout_flush+0x120 0xffffffff80851cc2
| vm_pageout_thread() at vm_pageout_thread+0x150a 0xffffffff80853379
| smp_invltlb: endless loop 00000000 00000002, rflags 0000000000000286 retrysmp_invltlb: ipi sent

The oldest one, from about a month ago:
| Trace beginning at frame 0xffffffe0b43c74b8
| smp_invltlb() at smp_invltlb+0x229 0xffffffff80a19cd5
| smp_invltlb() at smp_invltlb+0x229 0xffffffff80a19cd5
| pmap_qremove() at pmap_qremove+0x53 0xffffffff80a12a2f
| vfs_vmio_release() at vfs_vmio_release+0x142 0xffffffff80657f5b
| getnewbuf() at getnewbuf+0x57d 0xffffffff8065cbd9
| getblk() at getblk+0x3b1 0xffffffff8065d56e
| cluster_readx() at cluster_readx+0x34b 0xffffffff806665fa
| hammer_vop_read() at hammer_vop_read+0x1cf 0xffffffff8082b0fe
| vop_read() at vop_read+0x85 0xffffffff80682a3f
| vn_read() at vn_read+0x1d0 0xffffffff80681c10
| kern_preadv() at kern_preadv+0x171 0xffffffff80630abf
| sys_read() at sys_read+0x66 0xffffffff80630c11
| syscall2() at syscall2+0x412 0xffffffff809e9e8c
| Xfast_syscall() at Xfast_syscall+0xcb 0xffffffff809d28db
| smp_invltlb: endless loop 00000000 00000002, rflags 0000000000000292 retrysmp_invltlb: ipi sent

The only thing the traces have in common is the last line, except for
the ‘rflags’ value, which differs slightly each time. I can't make
sense of it; perhaps someone more experienced can help me shed some
light on what is happening here?
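
For what it's worth, decoding the three values against the standard
x86-64 RFLAGS bit layout (reserved bit 1 always set, PF = bit 2,
AF = bit 4, SF = bit 7, IF = bit 9) gives, if I am reading it correctly:

    0x282 = reserved | SF | IF
    0x286 = reserved | PF | SF | IF
    0x292 = reserved | AF | SF | IF

The differences are only the arithmetic status flags PF and AF, which
reflect whatever computation happened to run last; IF is set in all
three, so interrupts were enabled while the loop spun.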


--
If you can do it then why do it?