Plans for 1.8+ (2.0?)
Bill Hacker
wbh at conducive.org
Sun Feb 18 19:03:54 PST 2007
Rupert Pigott wrote:
> On Sun, 18 Feb 2007 14:25:57 +0100, Michel Talon wrote:
>> Rupert Pigott wrote:
>>> On Thu, 01 Feb 2007 09:39:30 -0500, Justin C. Sherrill wrote:
>>>> True, but Matt has explained that ZFS doesn't provide the functionality
>>>> that DragonFlyBSD needs for cluster computing.
>>> ZFS solves the problem of building a bigger fileserver, but it
>>> doesn't help you distribute that file system across hundreds or thousands
>>> of grid nodes. ZFS doesn't address the issue of high-latency
>>> comms links between nodes, and NFS just curls up and dies when you try to
>>> run it across the Atlantic with 100+ms of latency.
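
A quick back-of-the-envelope supports that. Treat the protocol as strictly
serial and synchronous - one block per round trip - and throughput collapses
as the RTT grows. The 8 KiB block size and the purely serial request stream
in this sketch are simplifying assumptions, not a model of any real NFS
implementation:

    # Rough ceiling for a serial, synchronous remote-file protocol:
    # one block per network round trip. Block size and strictly
    # serial behaviour are simplifying assumptions for illustration.

    def serial_throughput_kib_s(rtt_ms, block_kib=8.0):
        """KiB/s when every block costs one full round trip."""
        return block_kib / (rtt_ms / 1000.0)

    for rtt_ms in (0.2, 10.0, 100.0):   # LAN, metro, trans-Atlantic-ish
        print("RTT %6.1f ms -> ~%8.1f KiB/s"
              % (rtt_ms, serial_throughput_kib_s(rtt_ms)))

Call it ~40 MB/s at LAN latencies, ~800 KiB/s at 10 ms, and a mere ~80 KiB/s
at 100 ms. Hence 'curls up and dies'.
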
>>> I don't know if IBM's GridFS does any better with the latency, but it
>>> certainly scales a lot better. The barrier to adoption is $$$: it
>>> costs $$$ to buy, and a lot more $$$ to train up and hire the SAs to run
>>> it. There are other options like AFS too, but people tend to be put off by
>>> the learning curve and the fact that it's an extra rather than something
>>> that is packaged with the OS.
>> Of course it is none of my business, but I have always wondered about the
>> real usefulness of a clustering OS in the context of free systems, and your
>> post allows me to explain why.
> Here's one reason: a free project wants to crack a particular problem that
> needs massive amounts of cycles and/or IO bandwidth, but no individual
> can afford to run a datacentre. A distributed compile farm would fit
> that bill.
Yes - but how many of these are needed, even globally?
>> People who have the money to buy machines by
>> the thousands, run them, pay the electricity bill, etc. should also have
>> the money to pay $$$ to IBM, and not count on the generosity of unpaid
>> developers. Small installations are the natural target of free systems,
> Doesn't always happen that way. Quite frequently internal politics,
> short-sightedness, NIH, budget battles, etc. get in the way.
>> and in this context I remain convinced that the clustering ideas have
>> utility next to nil. And frankly, I doubt they have any utility for big
> This would be an enabling technology. Big business doesn't innovate; the
> little guys do the innovation. I didn't see big business amongst the early
> adopters of Ethernet, TCP/IP, UNIX, etc.
No - they had better technology.
;-)
All three of those fall into the 'last man standing' class of 'good enough'
compromises. None was - now or ever - at the top of the performance food chain.
All have, however, provided a lot of job growth and job security, and therein
lies a tale...
>> systems if you don't use high-speed, low-latency interconnects, which are
>> far more expensive than the machines themselves. And even with this highly
> Tell that to the folks who crack codes. Low latency is highly desirable
> but it isn't essential for all problems... Render farms and compile
> farms are good examples.
Yes, a good example - but of both the utility of the 'feature' and of its
rarity as a percentage of the 'computing world' at large.
>> expensive hardware, if you don't have highly capable programmers able to
>> really make use of concurrency.
> They help, but they aren't essential. There are a surprising number of
> problems out there that can be cracked in a dumb way. :)
Too true! 'BFBI' shell, perl, et al...
>> On the contrary, the disks of Joe User are becoming bigger and bigger,
>> his processor is getting more and more cores, so there is clearly a need
>> for file systems appropriate for big disks and sufficiently reliable
>> (ZFS being an example), and for operating systems able to use multiple
>> cores efficiently.
> I suspect that smaller, slower cores are on the agenda for the great
> unwashed masses. I am one of those people who thinks the days of the
> foot-warming tower case are numbered. Laptops, PDAs and game consoles
> already out-ship desktops by a few orders of magnitude; I don't see that
> trend swinging back the other way anytime soon.
True - we've converted all our 1U servers to C3, now looking at C7. But we are
not giving up performance with the reduced heat and UPS load - just shifting
heavier loads to Dual-Core, and lighter loads to 'powerful enough' for the job
at hand. So too with PDAs et al. Most are still overkill for what they are
used for.
> I think you have also missed a point here. Applications like SETI just
> weren't possible without the Grid concept (for funding reasons) - and people
> really do want to do that kind of stuff. Sure, you and I might question the
> utility of it, but the fact is it gave those guys a shot at doing
> something way beyond their budget *without* having to resort to exotic
> hardware or software.
Granted - but despite the very large number of machines involved in SETI, it is
still a barely visible fraction of the machines that *could* be involved in it.
> For the record, I cut my parallel-processing teeth on OCCAM & Transputers.
> This Grid stuff is neanderthal by comparison, but I have seen people
> get real work out of it, and I can see a bunch of folks out there who
> could also find it useful... Perhaps in the future you could contribute
> your unused cycles & storage to web-serving & compiling for the DFly
> project. I wouldn't mind that. :)
> Cheers,
> Rupert
I'm not 'anti-clustering' - I just don't see it as a broad enough need OR 'win'
to improve DFLY's perceived value.
The virtual kernel toolset - almost a 'byproduct' - seems to me both more
generally valuable and a more certain, visible 'advantage' to more folks.
Even if you use only ONE of these, the ability to set up, test, tear down, and
do it again - while preserving the 'controls', debugging, and monitoring of the
primary environment - seems very attractive for coding and testing.
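
For instance - a minimal sketch of that cycle, scripted. The paths, memory
size, and test command below are made-up placeholders, and the exact flags
should be checked against vkernel(7) for your build:

    #!/usr/bin/env python3
    # Sketch of a scripted set-up/test/tear-down cycle around a DragonFly
    # virtual kernel. Paths, memory size, and the test command are
    # placeholders; verify flags against vkernel(7) for your build.
    import signal
    import subprocess
    import time

    VKERNEL = "/var/vkernel/boot/kernel/kernel"   # assumed install path
    ROOTIMG = "/var/vkernel/rootimg.01"           # assumed root disk image

    def run_cycle(test_cmd):
        # Boot the vkernel as an ordinary userland process; -m sets the
        # guest's RAM and -r points it at its root disk image.
        vk = subprocess.Popen([VKERNEL, "-m", "256m", "-r", ROOTIMG])
        try:
            # Crude wait for the guest to boot; a real harness would
            # poll for sshd or a console prompt instead.
            time.sleep(30)
            subprocess.run(test_cmd, check=True)
        finally:
            # Tear down and go again. The guest is just a process, so
            # the primary environment is never at risk.
            vk.send_signal(signal.SIGTERM)
            vk.wait()

    run_cycle(["ssh", "vkernel-guest", "make", "test"])  # hypothetical test

The point being that the guest is just another userland process, so the
host's ps, gdb, and ktrace all still apply to it while it runs.
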
ISTR that WinOS2 was used by Win 3.X developers for similar productivity gains
at one time.
Bill Hacker