em driver - issue #2
EM1897 at aol.com
Mon Feb 7 11:21:27 PST 2005
>After reading this I realized that you are right that the reason the
>memory fails is that the box is interrupt bound (which is just what I was
>trying to achieve when I started this test). I didn't choose 145Kpps by
>accident; I was trying to find a point at which the machine would livelock,
>to compare it to FreeBSD (since top wasn't working). Usually I fire about
>30Kpps (which is typical load on a busy 100Mb/s network) and see what
>pct of system resources is being used to index the performance of the box.
>145K would be more than this particular box can handle. A faster box can
>easily FORWARD 300Kpps, so it's not the raw number but the box's
>capability. I hadn't considered that I'm working with a 32-bit bus on this
>system.
>
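A quick back-of-the-envelope on the rates mentioned above (a sketch only; the 2 µs per-packet driver cost is an assumed figure for illustration, not a measurement):

```python
# Interarrival time per packet at the rates discussed, and the CPU
# fraction consumed if each packet costs a fixed amount of driver time.
# PER_PACKET_COST_US is an illustrative assumption, not a measured value.

rates_pps = {"typical load": 30_000, "test rate": 145_000, "fast box": 300_000}
PER_PACKET_COST_US = 2.0  # assumed per-packet interrupt/driver cost

for name, pps in rates_pps.items():
    gap_us = 1e6 / pps                   # microseconds between packets
    load = PER_PACKET_COST_US / gap_us   # fraction of one CPU consumed
    print(f"{name:12s} {pps:>7} pps: {gap_us:5.2f} us between packets, "
          f"~{load:.0%} CPU at {PER_PACKET_COST_US} us/packet")
```

At 145Kpps a packet arrives roughly every 7 µs, so even a couple of microseconds of per-packet work already eats a large slice of the CPU; at 300Kpps the budget halves again.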
>Lowering the test to 95Kpps, DragonFly handled it without any problems.
>So I'd say that the failure to get mbuf clusters is a function of the
>system being perpetually overloaded. However, the elegance with which a
>system handles an overload condition is important. The fact that the em
>driver doesn't recover normally is the issue now. You can't have a spurt
>of packets bringing down the system.
I need to take back what I said here. I ran the 145Kpps test on a
FreeBSD 4.9 system, and it not only handles it elegantly, it does so
at only 30% CPU utilization. So I certainly HOPE that the DragonFly
system isn't interrupt bound, because if it is, then something is very,
very wrong with the performance. There is definitely something that
doesn't work right. Here is the output of vmstat -m right after the failure.
Memory statistics by type                          Type  Kern
        Type  InUse MemUse HighUse  Limit Requests Limit Limit Size(s)
atkbddev 2 1K 0K 24584K 2 0 0
pgrp 19 1K 0K 24584K 21 0 0
uc_devlist 13 1K 0K 24584K 13 0 0
nexusdev 4 1K 0K 24584K 4 0 0
memdesc 1 3K 0K 24584K 1 0 0
lockf 3 1K 0K 24584K 129 0 0
atexit 1 1K 0K 24584K 1 0 0
isadev 8 1K 0K 24584K 8 0 0
ZONE 17 2K 0K 24584K 17 0 0
VM pgdata 1 16K 0K 24584K 1 0 0
zombie 0 0K 0K 24584K 637 0 0
UFS dirhash 36 5K 0K 24584K 36 0 0
UFS mount 9 19K 0K 24584K 9 0 0
UFS ihash 1 128K 0K 24584K 1 0 0
FFS node 2070 486K 0K 24584K 2086 0 0
dirrem 0 0K 0K 24584K 16 0 0
diradd 0 0K 0K 24584K 21 0 0
freefile 0 0K 0K 24584K 13 0 0
freeblks 0 0K 0K 24584K 10 0 0
freefrag 0 0K 0K 24584K 3 0 0
allocdirect 1 1K 0K 24584K 31 0 0
bmsafemap 1 1K 0K 24584K 14 0 0
newblk 1 1K 0K 24584K 32 0 0
inodedep 2 129K 0K 24584K 40 0 0
pagedep 1 16K 0K 24584K 8 0 0
p1003.1b 1 1K 0K 24584K 1 0 0
agp 1 1K 0K 24584K 1 0 0
NFS hash 1 128K 0K 24584K 1 0 0
NQNFS Lease 1 1K 0K 24584K 1 0 0
NFS daemon 1 1K 0K 24584K 1 0 0
syncache 1 6K 0K 24584K 1 0 0
tcptemp 25 2K 0K 24584K 25 0 0
ipq 50 2K 0K 24584K 50 0 0
IpFw/IpAcct 1 1K 0K 24584K 1 0 0
in_multi 3 1K 0K 24584K 3 0 0
routetbl 21 2K 0K 24584K 40 0 0
ether_multi 12 1K 0K 24584K 12 0 0
ifaddr 23 4K 0K 24584K 23 0 0
BPF 3 1K 0K 24584K 3 0 0
MSDOSFS mount 1 128K 0K 24584K 1 0 0
vnodes 2091 458K 0K 24584K 2123 0 0
vnodeops 40 12K 0K 24584K 250 0 0
mount 4 2K 0K 24584K 6 0 0
vfscache 4639 441K 0K 24584K 4647 0 0
BIO buffer 62 66K 0K 24584K 96 0 0
proc-args 22 1K 0K 24584K 540 0 0
pcb 20 7K 0K 24584K 41 0 0
soname 1 1K 0K 24584K 68 0 0
kqueue 3 3K 0K 24584K 23 0 0
mbufcl 3064 3088K 0K 24584K 24396 0 0
mbuf 5533 1384K 0K 24584K 12219 0 0
ptys 1 1K 0K 24584K 1 0 0
ttys 378 49K 0K 24584K 858 0 0
sigio 1 1K 0K 24584K 1 0 0
file 66 5K 0K 24584K 1985 0 0
file desc 27 5K 0K 24584K 664 0 0
dev_t 192 15K 0K 24584K 192 0 0
shm 1 9K 0K 24584K 1 0 0
kld 4 1K 0K 24584K 35 0 0
module 88 5K 0K 24584K 88 0 0
varsym 220 7K 0K 24584K 234 0 0
ATA generic 12 2K 0K 24584K 12 0 0
AR driver 1 1K 0K 24584K 3 0 0
AD driver 1 1K 0K 24584K 1 0 0
ISOFS mount 1 128K 0K 24584K 1 0 0
sem 3 5K 0K 24584K 3 0 0
MD disk 2 1K 0K 24584K 2 0 0
msg 4 24K 0K 24584K 4 0 0
rman 54 3K 0K 24584K 451 0 0
pipe 8 2K 0K 24584K 8 0 0
ioctlops 0 0K 0K 24584K 8 0 0
taskqueue 2 1K 0K 24584K 2 0 0
SWAP 2 141K 0K 24584K 2 0 0
ATA CAM transport 2 1K 0K 24584K 20 0 0
ACD driver 2 2K 0K 24584K 2 0 0
kobj 63 71K 0K 24584K 63 0 0
eventhandler 13 1K 0K 24584K 13 0 0
bus 272 14K 0K 24584K 724 0 0
callout 1 64K 0K 24584K 1 0 0
sysctloid 73 2K 0K 24584K 73 0 0
sysctl 0 0K 0K 24584K 146 0 0
lwkt message 0 0K 0K 24584K 461352 0 0
MSFBUF 1 48K 0K 24584K 1 0 0
MPipe Array 6 1K 0K 24584K 6 0 0
ATAPI generic 1 1K 0K 24584K 17 0 0
temp 80 107K 0K 24584K 1064 0 0
devbuf 208 236K 0K 24584K 319 0 0
uidinfo 4 1K 0K 24584K 7 0 0
cred 13 2K 0K 24584K 17 0 0
subproc 39 4K 0K 24584K 679 0 0
proc 2 4K 0K 24584K 2 0 0
session 17 1K 0K 24584K 19 0 0
Memory Totals:  In Use    Free    Requests
                 7485K      0K      516808
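To spot which malloc types dominate in output like the table above, the rows can be sorted by the MemUse column; a minimal sketch (the sample rows are taken from the dump above):

```python
# Sort vmstat -m rows by MemUse to find the biggest kernel memory
# consumers. Type names may contain spaces ("FFS node"), so the type
# is everything before the last seven whitespace-separated fields.

sample = """\
mbufcl 3064 3088K 0K 24584K 24396 0 0
mbuf 5533 1384K 0K 24584K 12219 0 0
FFS node 2070 486K 0K 24584K 2086 0 0
vfscache 4639 441K 0K 24584K 4647 0 0
"""

def parse(line):
    parts = line.split()
    name = " ".join(parts[:-7])          # type name, possibly multi-word
    memuse_kb = int(parts[-6].rstrip("K"))  # MemUse column, in KB
    return name, memuse_kb

rows = sorted((parse(l) for l in sample.splitlines()), key=lambda r: -r[1])
for name, kb in rows:
    print(f"{name:12s} {kb:>6} KB")
```

Run against the full dump, this puts mbufcl and mbuf at the top, consistent with the cluster-allocation failure being described.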
Getting back to my question about allocating memory for the kernel:
is there currently no way to do this in DragonFly, as you could with
kern_vm_kmem_size before?
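For context, the knob being referred to appears to correspond to FreeBSD's loader tunable kern.vm.kmem.size, which sizes the kernel malloc arena at boot (a hedged aside; whether DragonFly kept an equivalent at this point is exactly the open question, and the value below is illustrative only):

```
# /boot/loader.conf on FreeBSD (illustrative value, not a recommendation)
kern.vm.kmem.size="268435456"    # request a 256 MB kernel malloc arena
```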