load balancing

David Cuthbert dacut at kanga.org
Fri Nov 3 23:37:45 PST 2006

Joerg Sonnenberger wrote:
> Only if you use a broken web server. With a proper O(1) event
> notification mechanism and async web servers, keep-alive is a huge win,
> if you don't have a single server load high enough to run out of file
> descriptors.

Heh... well:
- We only recently got proper O(1) event notifications,
- We definitely don't have an asynchronous server, and
- During peak, our online servers run pretty darn close to full capacity.
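
For anyone who hasn't played with these: below is a bare-bones sketch of
the kind of server Joerg is describing, with kqueue(2) as the O(1) event
mechanism.  It's illustrative only (the port, buffer size, and skipped
error handling are all made up; it's nobody's production code), but it
shows why an idle keep-alive connection costs a file descriptor and a
kevent registration rather than a thread:

#include <sys/types.h>
#include <sys/event.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin;

    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(8080);        /* arbitrary port for the sketch */
    bind(lfd, (struct sockaddr *)&sin, sizeof(sin));
    listen(lfd, 128);

    int kq = kqueue();
    struct kevent ev;
    EV_SET(&ev, lfd, EVFILT_READ, EV_ADD, 0, 0, NULL);
    kevent(kq, &ev, 1, NULL, 0, NULL);

    for (;;) {
        struct kevent hits[64];
        /* Cost scales with ready events, not with the number of
         * open (possibly idle, kept-alive) connections. */
        int n = kevent(kq, NULL, 0, hits, 64, NULL);

        for (int i = 0; i < n; i++) {
            int fd = (int)hits[i].ident;
            if (fd == lfd) {
                /* New connection: register it and go back to waiting.
                 * An idle keep-alive client just sits in the kqueue,
                 * costing no thread. */
                int cfd = accept(lfd, NULL, NULL);
                EV_SET(&ev, cfd, EVFILT_READ, EV_ADD, 0, 0, NULL);
                kevent(kq, &ev, 1, NULL, 0, NULL);
            } else {
                char buf[4096];
                ssize_t len = read(fd, buf, sizeof(buf));
                if (len <= 0) {
                    close(fd);    /* client went away; kqueue forgets it */
                } else {
                    /* Parse the request, send a response, and leave the
                     * fd open for the next request on the connection. */
                    const char rsp[] = "HTTP/1.1 200 OK\r\n"
                                       "Content-Length: 3\r\n\r\nok\n";
                    write(fd, rsp, sizeof(rsp) - 1);
                }
            }
        }
    }
}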

To expand on the second point in that list a bit: when you make a request 
to our server, the thread handling it is tied up for the duration of that 
request.  Each server has a fixed number of threads, so tying one of them 
up just to hold an idle keep-alive connection open would be a big deal.  
Our way of gracefully handling a request that holds its thread too long 
is to just kill it (and log tons of errors and page a few engineers).  
Yeah, ugly.
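
For illustration, here's roughly what that boils down to, as a toy
pthreads sketch.  The timeout, the names, and the stand-in workload are
all invented; the point is just that one request owns one thread until a
watchdog kills it:

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

#define REQUEST_TIMEOUT_SECS 10    /* assumed limit, not our real number */

static volatile sig_atomic_t request_done = 0;

/* One worker handles one request, start to finish.  While it runs, the
 * thread can serve nobody else -- which is why parking a thread on an
 * idle keep-alive connection is unaffordable in this model. */
static void *handle_request(void *arg) {
    (void)arg;
    sleep(30);                     /* stand-in for a request that stalls */
    request_done = 1;
    return NULL;
}

int main(void) {
    pthread_t worker;
    pthread_create(&worker, NULL, handle_request, NULL);

    /* Crude watchdog: give the request its time budget, then kill the
     * thread and complain loudly -- the "log tons of errors and page a
     * few engineers" path.  sleep(3) is a cancellation point, so the
     * cancel takes effect while the worker is blocked. */
    sleep(REQUEST_TIMEOUT_SECS);
    if (!request_done) {
        fprintf(stderr, "request exceeded %d s; killing its thread\n",
                REQUEST_TIMEOUT_SECS);
        pthread_cancel(worker);
    }
    pthread_join(worker, NULL);
    return 0;
}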

Even if we were to fix the thread issue, there's enough custom legacy 
crap that assumes a request-per-thread model that lots of stuff would 
break.  I'd like to think we're unique in being this crufty, but the 
anecdotal tales I hear lead me to believe this is about par for the 
course... :-(
