Any serious production servers yet?

Matthew Dillon dillon at
Tue May 30 12:09:41 PDT 2006

    Er.  Well, if I were talking about today I would be talking about today.
    I'm talking about the near-future, 2-3 years from now.  It would be the
    height of stupidity to have programming goals that only satisfy the
    needs of today.

    In 2-3 years single-core cpus will be relegated to niche status.  You
    won't be able to *BUY* Intel or AMD single-cores at all for general
    purpose computers.  It won't matter a bit whether the average consumer
    is able to use the extra computing power, it will be there anyway
    because it doesn't cost Intel or AMD any more to build it versus
    building single-core cpus, and it doesn't eat any more power either
    (in fact, it eats less, for more aggregate computing power).  So
    regardless of what you believe, the future is quite clearly going to
    become permanently multi-core.

    In any case, it's a mistake to assume that the extra computing power
    is wasted just because you can't think of anything that can use it
    right now.  That mentality is what caused Bill Gates to make the statement
    that no computer would ever need more than 640KB of memory.

    There are plenty of applications, both existing and on the horizon, that
    would easily be able to use the additional computing power.  Even on a
    fast machine today SSH can still only encrypt at a 25-40MB/sec rate.
    Filesystems such as ZFS are far more computationally expensive than
    what we use today, but what you get for that price is an unbelievable
    level of stability and redundancy.  Photo-processing?  It takes my
    fastest box 4 hours to run through the fixups for one trip's worth of
    photos.  Since that workload is primarily userland, it only takes 
    2 hours on my dual-core box.  Encryption, Graphics, Photo-processing,
    Database operations.
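
    The photo-fixup case above is the easy kind of parallelism: each file
    is processed independently, so a worker pool can keep every core
    busy.  Here is a minimal sketch of that idea; fix_photo() is a
    hypothetical stand-in for a real CPU-bound image-processing step,
    not an actual photo tool.

```python
# Hedged sketch: an embarrassingly parallel per-file batch job, the
# shape of workload that roughly halves in wall-clock time on a
# dual-core box.  fix_photo() is a hypothetical stand-in for a real
# CPU-bound fixup (decode, color-correct, re-encode).
from multiprocessing import Pool

def fix_photo(path):
    # Fake "work": derive a small checksum from the path so the
    # function is pure and its result is easy to check.
    checksum = sum(ord(c) for c in path) % 256
    return (path, checksum)

def fix_all(paths, workers=2):
    # With 2 workers on a dual-core machine, independent CPU-bound
    # items spread across both cores.
    with Pool(workers) as pool:
        return pool.map(fix_photo, paths)

if __name__ == "__main__":
    photos = ["trip/img%04d.jpg" % i for i in range(8)]
    results = fix_all(photos)
    print(len(results))
```

    The same pattern applies to any of the workloads named above
    (encryption, graphics, database operations) whenever the work
    items don't depend on each other.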

    What else?  Interpreted languages are a big deal these days.  Java is
    still ridiculously slow compared to a direct-coded program even on the
    fastest cpu.  How about virtualization?  With the fast external links
    becoming available (for example, AMD's externalized HyperTransport
    bus) it is far more economical to have a pure-computing box and use
    virtualization to split it up into many virtualized machines.  I know
    people that don't HAVE separate windoz boxes anymore, or even
    multi-boot systems anymore.  They run all the OS's they need on one
    system, simultaneously, using VMWARE.

    So I am not stuck in the 'I don't need that much computing power' box.
    I *DO* need that much computing power: so I don't have to worry about
    losing my life's work to a disk failure, so I can have the 3-D desktop
    I've always dreamed about, so I can run 20 parallel ssh links for
    backups, and for a myriad of other reasons.  It means a machine room
    can have fewer boxes in it (with the reduced cooling requirements),
    a print preprocessor (another primarily user-level cpu hog) can shove
    a document out twice as fast, and even a 10 year old kid can play his
    favorite MMORPG at 60 frames a second on the cheap.   Make no mistake,
    there are a billion things that the computing power can be used for.
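
    The "20 parallel ssh links" example is the same fan-out idea at the
    level of whole processes.  A minimal sketch, assuming hypothetical
    hosts and an rsync-over-ssh command line (neither is from the
    original post): each thread blocks on its own transfer, so all the
    encrypted streams run simultaneously and the cipher work spreads
    across the cores.

```python
# Hedged sketch of fanning out many backup links at once.  The host
# names and the rsync command line are hypothetical illustrations;
# dry_run=True skips the real transfer so the structure can be
# exercised without a network.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["backup%02d.example.com" % i for i in range(20)]  # hypothetical

def backup(host, dry_run=True):
    cmd = ["rsync", "-az", "-e", "ssh", "/home/", host + ":/backups/"]
    if dry_run:
        return (host, 0)  # pretend the transfer succeeded
    return (host, subprocess.call(cmd))

def backup_all(hosts):
    # One thread per link: each blocks on its own ssh/rsync process,
    # so 20 encrypted streams proceed in parallel.
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        return list(pool.map(backup, hosts))
```

    With dry_run=False this would actually launch the transfers; the
    per-stream ssh encryption (the 25-40MB/sec figure mentioned earlier)
    is exactly the kind of work that benefits from extra cores.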


More information about the Users mailing list