OT ... the new buzz word: virtualization

Bill Hacker wbh at conducive.org
Thu Mar 31 22:04:05 PST 2005


Marc G. Fournier wrote:

> Everyone seems to be jumping onto this one lately, and I'm curious as
> to exactly what it means, and what it can do ...

In the early days, variants of the virtual machine on 'Big Iron'
mainframes[1] were driven by the very high cost of the hardware, and by
the fact that applications of the day didn't always need or *want* an
'operating system' - many wished to use the hardware directly for best
efficiency.  Virtualizing
permitted disparate user needs to be kept separate
and kept one customer's buggy driver from trashing an expensive
peripheral. Such folk sometimes wrote machine-code that played
recognizable 'music' on the head-positioners of IBM disk drive arrays
that cost as much as a fair-sized apartment building.

These days, the hardware is ridiculously cheap, at least by comparison,
and an 'Operating System' is taken for granted, even in 'embedded' systems.

'Virtualization Revanche' is back on-stage because for *most* work, the
CPUs and memory of today are vastly under-stressed and sit idle or
empty much of the time. Most things are I/O bound as to storage media,
network connectivity, keystrokes awaited, or all of the above.
- or would be if they weren't chasing their tail in Java...   ;-)

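
To put a rough number on "vastly under-stressed" - every figure below is
an invented illustration, not a measurement - an I/O-bound service that
burns a couple of milliseconds of CPU per request leaves most of the
processor idle, which is exactly the headroom virtualization wants to
reclaim:

```python
# Back-of-the-envelope CPU utilization for an I/O-bound service.
# Every number here is an assumption for illustration, not a measurement.
cpu_ms_per_request = 2.0   # CPU time actually burned per request
requests_per_sec = 100     # offered load

busy_ms_per_sec = cpu_ms_per_request * requests_per_sec
utilization = busy_ms_per_sec / 1000.0    # fraction of one CPU in use
headroom_guests = round(1 / utilization)  # rough consolidation factor

print(f"CPU busy: {utilization:.0%}")                # -> CPU busy: 20%
print(f"Guests that could share it: ~{headroom_guests}")  # -> ~5
```

The same arithmetic in reverse is why consolidation stops paying off the
moment the guests' I/O peaks start landing on top of each other.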
An AIX-5L (to name just one of several 'partitionable' commercial OSes),
VM or Xen environment allows sharing these otherwise-idle hardware
resources with lower risk of adverse interactions, and greater version,
or even 'race' independence than a single multi-user OS can offer.

The 'visible' CPU overhead can be surprisingly small.  Balancing storage
or network I/O needs is generally manageable. The biggest challenge is
usually providing enough 'real' memory.

And, of course, competent and available staff to configure, operate,
troubleshoot, and keep it all current and 'up' 24 X 7 X 365.

On that last score, virtualization, or even 'dual boot' has a tough go
competing against simply putting the 'other' needs on a separate box.
The 'nuisance factor' is more costly than keeping last-year's hardware
around somewhere.

Where a VM environment *will* catch on is when the hardware makers
build it in at the hardware level so there is no need for much
configuration or maintenance skill.  Just tick a box to allocate shares.
The crossbar switch used in IBM RS-6000 and GE-Honeywell-Bull Escala
servers - now migrated to the Mac - offers part of what is needed to do
this easily and well.  Add dual/multi-core CPU, a micro-exec 'console
controller', and we wouldn't be too far away from an interface that said
'how do you want to partition...' any and all of the resources, not
just the HDD...

Even 'if/as/when' such a 'transparent' *hardware* environment were to
become cheap and available (do not hold your breath waiting..), an OS
that uses a model that can handle 'probabilistic' I/O well (as DragonFly
will) has advantages.  Resource sharing can never be totally isolated,
so taking disk I/O peaks and valleys into account matters.

But I suspect that what we will *actually* get is progressively smaller
and less-costly 'blade' boxen for commercial use, and continued use
of 'book' PCs where rackmounts were meant to go, or even shelves full of
Mac Minis for some folks. Minimal labor cost, no need to even install
rack rails - just slap 'em on a shelf.

If Xen takes one man-day to install and set up, in most developed
countries, the fully-burdened cost to an employer of the 'one man day'
will buy one or more commodity boxes.  Even an IBM 1U e-Series server
ready to use with SCSI and all is cheaper than our 'day rate'.
And that is ONE day.....
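
The break-even above is blunt enough to put in three lines - both
figures are assumed for illustration (a 2005-ish ballpark, in USD), not
quotes from anyone's price list:

```python
# Rough labour-vs-hardware break-even for "one man-day to set up Xen".
# Both figures are assumptions for illustration, not real quotes.
burdened_day_rate = 800.0  # fully-burdened cost of one admin man-day
commodity_server = 700.0   # entry-level 1U commodity box, ready to rack

setup_cost = 1.0 * burdened_day_rate   # one man-day of install and setup
print(setup_cost >= commodity_server)  # the man-day buys the box outright
```

Tweak the assumptions as you like; the point stands whenever skilled
labour per setup costs more than the box it would consolidate away.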

Technology is as much slave to economics as master....

YMMV,

Bill Hacker




