Stable tag slipped

Raphael Marmier raphael at
Mon Apr 4 15:52:22 PDT 2005

Matthew Dillon wrote:
:I've been running 3 loops of mirroring wget -m on the Apache manual,
:with the fetched pages deleted in between, MaxClients=256,
:MaxKeepAliveRequests=0. One loop is run locally.
:There are more than 4000 connections in TIME_WAIT, more than 4000 sockets.
:The server responds very well and there are no delays.
:Tomorrow I will set MaxKeepAliveRequests to an impossibly high number to
:generate long-running connections and see how it copes.
:I welcome suggestions on what tests to run, as I am no expert in either
:OS or networking.

    Have you adjusted the portrange?  Do these:

	sysctl net.inet.ip.portrange
	sysctl net.inet.tcp.msl
	sysctl kern.ipc.maxsockets
	sysctl net.inet.tcp.recvspace
	sysctl net.inet.tcp.sendspace

    You may also have to lower the MSL on the originating machines to reduce
    the number of sockets being held in a TIME_WAIT state.
	(default is 30000ms)
	sysctl net.inet.tcp.msl=15000
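For the record, widening the ephemeral port range Matthew is asking about is a
one-liner on the DragonFly box; the 65535 below is only an illustration of the
idea, not a recommendation:

```shell
# Widen the ephemeral (outgoing) port range so more sockets can sit in
# TIME_WAIT without exhausting it; the default range is first=1024, last=5000.
# The value 65535 here is illustrative.
sysctl net.inet.ip.portrange.last=65535

# Optionally halve the MSL as suggested above, shortening TIME_WAIT (= 2*MSL):
sysctl net.inet.tcp.msl=15000
```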
Currently, with the same tests still running:

dragonfly# netstat -tn | wc -l
dragonfly# netstat -tn | fgrep TIME_WAIT | wc -l
dragonfly# netstat -m
197/551/18176 mbufs in use (current/peak/max):
        150 mbufs allocated to data
        47 mbufs allocated to packet headers
114/246/4544 mbuf clusters in use (current/peak/max)
629 Kbytes allocated to network (4% of mb_map in use)
0 requests for memory denied
0 requests for memory delayed
0 calls to protocol drain routines
dragonfly# uptime
 0:45  up  1:31, 2 users, load averages: 0,19 0,28 0,30
dragonfly# uptime
 0:45  up  1:31, 2 users, load averages: 0,18 0,27 0,30
dragonfly# sysctl net.inet.ip.portrange
net.inet.ip.portrange.lowfirst: 1023
net.inet.ip.portrange.lowlast: 600
net.inet.ip.portrange.first: 1024
net.inet.ip.portrange.last: 5000
net.inet.ip.portrange.hifirst: 49152
net.inet.ip.portrange.hilast: 65535
dragonfly# sysctl net.inet.tcp.msl
net.inet.tcp.msl: 30000
dragonfly# sysctl kern.ipc.maxsockets
kern.ipc.maxsockets: 8104
dragonfly# sysctl net.inet.tcp.recvspace
net.inet.tcp.recvspace: 57344
dragonfly# sysctl net.inet.tcp.sendspace
net.inet.tcp.sendspace: 32768
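Cross-checking those numbers: the default range first=1024..last=5000 gives
only ~3977 ephemeral ports, and each finished connection holds one for
2*MSL = 60 s, which lines up with the 4000-odd TIME_WAIT sockets reported
above. A quick back-of-the-envelope check (plain shell arithmetic, nothing
DragonFly-specific):

```shell
# Ephemeral port budget with the default range shown by sysctl above.
first=1024
last=5000
ports=$((last - first + 1))            # 3977 ports for outgoing connections
echo "ephemeral ports: $ports"

# TIME_WAIT holds a socket for 2*MSL; with msl=30000 ms that is 60 seconds,
# so the sustainable new-connection rate is roughly ports / (2*MSL).
msl_ms=30000
timewait_s=$((2 * msl_ms / 1000))      # 60 seconds per socket in TIME_WAIT
rate=$((ports / timewait_s))           # ~66 new connections per second
echo "time_wait: ${timewait_s}s  max sustained rate: ~${rate}/s"
```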
The machine doing the hammering is a Mac running Mac OS X.
[pomme:~] raphael% sysctl net.inet.tcp.msl
net.inet.tcp.msl: 600
[pomme:~] raphael% sysctl kern.ipc.maxsockets
kern.ipc.maxsockets: 512
Just keep in mind there is one loop running locally on the dfbsd machine 
being tested.

Why should the portrange be adjusted?

