That is an interesting observation. What kinds of problems? I think I
saw weird free vs. used output in df too; I wonder whether that would
still have happened with softdeps disabled.

I think UFS support is super important for installations with, say, a
small internal system disk and big dedicated HAMMER ones. My initial
draw to DragonFly was this sort of configuration, since ZFS is
ridiculously complicated and the FreeNAS people seem to have bizarre
priorities. Would it make sense to disable softdeps by default? (A
sketch of how I have been toggling it per filesystem is further down.)

Back to the original problem: I ran a git bisect, but it wound up
pointing me at a seemingly innocuous libc mktemp change, and the
nearest kernel change was to networking, back on 11/04 (the rough
procedure is sketched below). I was going to give it another go with
the faster block device and hopefully get better testing results, but
I have not found the time yet.
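For the record, the bisect itself was nothing fancy; roughly the
following, with the two releases as endpoints. A minimal sketch: it
assumes the usual vX.Y.0 release tags in the DragonFly source tree,
and the reproducer is just my looped buildworld.

    cd /usr/src
    git bisect start
    git bisect bad v5.4.0    # first release that hangs
    git bisect good v5.2.0   # last release that behaved
    # At each step: build and boot the checked-out revision, run the
    # buildworld loop, then report the result back with one of:
    #   git bisect good      # survived the loop
    #   git bisect bad       # wedged in softdeps again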
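Also, for anyone who wants to repeat the experiment, this is roughly
how I have been toggling softdeps per filesystem. Just a sketch; it
assumes DragonFly's tunefs(8) still takes the FreeBSD-style -n flag,
and the device and mount point are placeholders for whatever you use:

    # Soft updates are a filesystem flag, not a mount option, so the
    # filesystem has to be unmounted before changing it.
    umount /build
    tunefs -n disable /dev/da0s1d   # "-n enable" turns it back on
    mount /dev/da0s1d /build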
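Next time it wedges I also plan to capture some state before
rebooting, along these lines (plain ps(1) flags; the grep is just a
crude filter for processes sitting in disk wait):

    # Note which processes are stuck in state D and what their wait
    # channels are, to confirm they are parked inside softdeps.
    ps axlww > /tmp/hang-ps.txt
    grep ' D' /tmp/hang-ps.txt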
> On February 11, 2019 at 10:28 AM John Marino <dragonflybsd@marino.st> wrote:
>
> [resend because I used wrong address]
> [sent before response below]
> FWIW, for me softdeps has always been buggy. The issues only resolve
> when it is turned off. As far as I am concerned, UFS with softdeps on
> DF has always been unusable. Sooner or later (and usually sooner) it
> acts up.
>
> On 2/11/2019 11:18, Eric Melville wrote:
>> Oh I hear you, and that was definitely where I pointed my finger at
>> first. But I will keep digging and see where the evidence takes me
>> for now.
>>
>> As an update, 5.2 with softdeps seems fine. 5.4 without softdeps,
>> also fine. So potentially a softdeps bug was introduced between the
>> two releases. I am churning through a git bisect now and will
>> hopefully have more solid data in the next day or so.
>>
>>> On February 11, 2019 at 9:07 AM Matthew Dillon <dillon@backplane.com> wrote:
>>>
>>> This feels more like an issue with the I/O and not with UFS
>>> specifically. But since you tried two different storage devices, it
>>> couldn't be that. Perhaps there is a power or overheating issue on
>>> the system.
>>>
>>> -Matt
>>>
>>> On Sun, Feb 10, 2019 at 1:42 PM Eric Melville <eric@rigelfore.com> wrote:
>>>
>>> Hello there,
>>>
>>> After installing 5.4 my system has been getting stuck in UFS,
>>> apparently in softdeps. At first I was faulting the -j12 buildworld,
>>> but then saw it at lower parallel counts, and eventually saw it when
>>> looping buildworld with no -j option at all. Then I was faulting my
>>> fast new NVMe, but eventually factored that out too by changing back
>>> to an old hard drive. In any case, the faster the hardware and the
>>> more work running, the more quickly and easily this seems to
>>> reproduce.
>>>
>>> Typically during the phase that removes old output, the build will
>>> hang indefinitely. Some processes continue to run but new ones never
>>> get going, and the old world clean never makes any progress. For
>>> example, ssh to the host in this state will succeed in connecting
>>> and authenticating, but the new shell never seems to run.
>>>
>>> I suppose I should try disabling softdeps next.