[issue884] Performance/memory problems under filesystem IO load

Nicolas Thery nthery at gmail.com
Wed Dec 19 01:19:15 PST 2007


2007/12/14, Hasso Tepper <sinknull at crater.dragonflybsd.org>:
> All tests are done with this command:
> dd if=/dev/zero of=./file bs=4096k count=1000
[...]
> There is one more strange thing in running these tests. I looked at memory
> stats in top before and after running dd.
>
> Before:
> Mem: 42M Active, 40M Inact, 95M Wired, 304K Cache, 53M Buf, 795M Free
> After:
> Mem: 70M Active, 679M Inact, 175M Wired, 47M Cache, 109M Buf, 1752K Free

FWIW, I observe similar figures.  I also noticed that deleting ./file
and waiting a bit restores memory to the "before" state.

The size increase in the wired pool can be reproduced more simply with:

sysctl vm.stats.vm.v_wire_count                          # A
dd if=/dev/zero of=./file bs=4096k count=1
sysctl vm.stats.vm.v_wire_count                          # B
rm ./file
sysctl vm.stats.vm.v_wire_count                          # C

Observed: A == C && B == A + 1, i.e. one extra wired page while ./file exists.
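
A quick script makes the comparison less error-prone (a minimal sketch;
the file name is arbitrary and the counts assume an otherwise idle
system):

#!/bin/sh
A=$(sysctl -n vm.stats.vm.v_wire_count)        # A: baseline
dd if=/dev/zero of=./file bs=4096k count=1 2>/dev/null
B=$(sysctl -n vm.stats.vm.v_wire_count)        # B: file written, vnode cached
rm ./file
C=$(sysctl -n vm.stats.vm.v_wire_count)        # C: vnode destroyed
echo "A=$A B=$B C=$C"                          # expect C == A and B == A + 1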

I traced this with gdb.  The additional wired page is part of a struct
buf (b_xio) instance tied to the ./file vnode.  I reckon this vnode
stays cached (in the namecache?) after the dd process exits, and that
deleting ./file forces destruction of the vnode.
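
One way to test that hypothesis without deleting the file is to force
the vnode to be recycled some other way, e.g. by remounting the
filesystem it lives on.  A sketch (/mnt/test is a hypothetical mount
point with an fstab entry; otherwise idle system assumed):

dd if=/dev/zero of=/mnt/test/file bs=4096k count=1 2>/dev/null
sysctl vm.stats.vm.v_wire_count       # one page above baseline
umount /mnt/test && mount /mnt/test   # destroys all cached vnodes for the fs
sysctl vm.stats.vm.v_wire_count       # back to baseline if the hypothesis holds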

AFAIU, wired pages cannot be reclaimed by the pager when memory is
low.  So is it normal to keep b_xio pages wired when they are "just"
cached in a vnode (i.e. no r/w operation in progress)?
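
If each cached vnode really does pin a b_xio page like this, the effect
should scale with the number of files touched, which would match the
memory figures quoted above.  A sketch of that check (file count and
sizes arbitrary):

#!/bin/sh
sysctl vm.stats.vm.v_wire_count       # baseline
i=0
while [ $i -lt 1000 ]; do
    dd if=/dev/zero of=./f$i bs=4k count=1 2>/dev/null
    i=$((i + 1))
done
sysctl vm.stats.vm.v_wire_count       # roughly baseline + 1000 if each
                                      # cached vnode pins one page
rm ./f*
sysctl vm.stats.vm.v_wire_count       # should return to baseline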
