[DragonFlyBSD - Bug #884] (In Progress) Performance/memory problems under filesystem IO load
bugtracker-admin at leaf.dragonflybsd.org
Tue Feb 18 06:47:13 PST 2014
Issue #884 has been updated by tuxillo.
Description updated
Category set to VM subsystem
Status changed from New to In Progress
Assignee deleted (0)
Target version set to 3.8.0
Hi,
I've done the same test under a vkernel:
# sysctl vm.stats.vm.v_wire_count
vm.stats.vm.v_wire_count: 11486
# dd if=/dev/zero of=./file bs=4m count=1
1+0 records in
1+0 records out
4194304 bytes transferred in 0.011742 secs (357201747 bytes/sec)
# sysctl vm.stats.vm.v_wire_count
vm.stats.vm.v_wire_count: 11675
# rm file
# sysctl vm.stats.vm.v_wire_count
vm.stats.vm.v_wire_count: 10647
And the same test on real hardware:
antonioh at nas:~$ sysctl vm.stats.vm.v_wire_count
vm.stats.vm.v_wire_count: 379492
antonioh at nas:~$ dd if=/dev/zero of=./file bs=4m count=1
1+0 records in
1+0 records out
4194304 bytes transferred in 0.035698 secs (117494297 bytes/sec)
antonioh at nas:~$ sysctl vm.stats.vm.v_wire_count
vm.stats.vm.v_wire_count: 379500
antonioh at nas:~$ rm file
antonioh at nas:~$ sysctl vm.stats.vm.v_wire_count
vm.stats.vm.v_wire_count: 378476
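A quick back-of-the-envelope check on the wire-count deltas above (a sketch, assuming the standard 4096-byte page size; the variable names are illustrative, taken from the vkernel transcript):

```shell
# Wire counts from the vkernel run above (hypothetical helper, not part of the test)
before=11486    # v_wire_count before dd
after=11675     # v_wire_count after dd
after_rm=10647  # v_wire_count after rm

pagesize=4096   # assumed x86 page size in bytes

echo "wired during dd: $(( (after - before) * pagesize )) bytes"
echo "released by rm:  $(( (after - after_rm) * pagesize )) bytes"
```

The drop after rm works out to roughly 4 MiB, close to the size of the file dd wrote, which is consistent with the file's buffer-cache pages staying wired only while the file exists.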
I don't see the high memory usage that corecode showed in his test.
Matt, there has been a ton of work in the VM subsystem since then; is it possible this is no longer the case?
Best regards,
Antonio Huete
----------------------------------------
Bug #884: Performance/memory problems under filesystem IO load
http://bugs.dragonflybsd.org/issues/884#change-11761
* Author: hasso
* Status: In Progress
* Priority: High
* Assignee:
* Category: VM subsystem
* Target version: 3.8.0
----------------------------------------
While testing a drive with dd I noticed serious performance problems. Programs
that need disk access block for 10 seconds or more; sometimes they don't
continue working until dd is finished. Raw disk access (i.e., writing directly
to the disk rather than to a file) is reported to be OK (I can't test it
myself).
All tests are done with this command:
dd if=/dev/zero of=./file bs=4096k count=1000
Syncing after each dd run helps to reproduce it more reliably (cache?).
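For scale, the reproduction command writes far more data than the machine's physical RAM (roughly 1 GB, judging from the top output below). A quick arithmetic check, assuming bs=4096k means 4096 KiB:

```shell
# Total data written by: dd if=/dev/zero of=./file bs=4096k count=1000
bs=$(( 4096 * 1024 ))   # block size in bytes (bs=4096k)
count=1000

echo "$(( bs * count / 1024 / 1024 )) MiB"   # several times physical RAM
```

Writing ~4000 MiB through the filesystem forces the page/buffer cache to turn over the entire RAM several times, which fits the memory-stat shifts reported below.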
There is one more strange thing in running these tests. I looked at memory
stats in top before and after running dd.
Before:
Mem: 42M Active, 40M Inact, 95M Wired, 304K Cache, 53M Buf, 795M Free
After:
Mem: 70M Active, 679M Inact, 175M Wired, 47M Cache, 109M Buf, 1752K Free
And as a side effect, I can't bring my network interfaces up any more after
running dd: "em0: Could not setup receive structures".