apt-get

Andreas Hauser andy at splashground.de
Sun May 30 18:51:03 PDT 2004


dillon wrote @ Sun, 30 May 2004 16:01:19 -0700 (PDT):
>     Well, I must say this is an eye-opener!   The only exposure to apt-get
>     that I have had has been with personal workstations, and it seems to
>     work great for keeping things reasonably up to date.  But, obviously,
>     there are some serious issues when it comes to using it on large
>     clusters of machines.

The problems are inherent in binary packages, more than in apt-get itself.
Say I want to use gcc3 instead of gcc2; then I need
a completely different binary tree.
Imagine there are 4 similar either/or choices like gcc2 vs. gcc3.
You then need 2^4 = 16 different binary trees
(times all your supported architectures) to accommodate
all the combinations.
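
To illustrate (the package names are made up): in a bash-style shell,
brace expansion enumerates the trees that four independent two-way
choices would demand:

  $ echo {gcc2,gcc3}-{db3,db4}-{krb5,heimdal}-{ssl096,ssl097} \
      | tr ' ' '\n' | wc -l
  16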

Problems start as soon as you want something nonstandard
(and you want that all the time: the newest postfix,
a cyrus from this millennium, or just a mozilla not 7
minor versions behind - to name a few Debian-related examples).
And mixing different binary package trees (like backports.org's
with woody's) opens a can of worms.
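
A hedged sketch of what that mixing looks like in practice (the
repository lines and priorities are illustrative, not a recipe):

  # /etc/apt/sources.list
  deb http://ftp.debian.org/debian woody main
  deb http://www.backports.org/debian woody postfix

  # /etc/apt/preferences -- keep backports from dragging in everything
  Package: *
  Pin: release a=stable
  Pin-Priority: 700

  Package: postfix
  Pin: origin www.backports.org
  Pin-Priority: 900

As soon as two backported packages disagree about a shared library,
you are debugging pin priorities instead of your software.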

You end up rolling your own packages. (We have an installation
for an office that uses >200 self-managed Debian packages, for example.)
Managing your own Debian package is about as much work as managing
your own port, except that with a binary-packages-only system you
are forced to start doing it much earlier, and you
have to do it for nearly all the dependencies too.
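
For the record, 'rolling your own' by hand looks roughly like this
(package choice and version suffix are made up):

  $ apt-get build-dep postfix     # pull in the build dependencies
  $ apt-get source postfix        # fetch and unpack the Debian source
  $ cd postfix-*/
  $ dch --local +local1 "rebuilt for our woody boxes"
  $ dpkg-buildpackage -rfakeroot -us -uc
  $ dpkg -i ../postfix_*.deb

Now multiply that by every dependency that also has to be newer.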

>     So, maybe we should roll our own after all.

The problem doesn't seem to me to be the infrastructure
so much as the many packages and their interaction.
The VFS idea can certainly help with multiple versions of libraries;
great, but why replace the working part of ports?
(I'm rather missing a discussion about which ports system to
extend, instead of one about replacing it.)
The big chunk of work is getting anywhere near 10000 packages.
And the ports system itself has seldom seemed to be the cause
of the bigger chunks of work in getting those ported.
At least for me the Makefile hassle is a minor annoyance
(though even that can be improved; see the ideas of NetBSD, OpenBSD
and Gentoo, and there are more).
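
For reference, the hassle in question is usually just a skeleton like
this (FreeBSD-style ports; all values hypothetical):

  PORTNAME=      foo
  PORTVERSION=   1.0
  CATEGORIES=    devel
  MASTER_SITES=  http://www.example.org/pub/
  MAINTAINER=    ports@example.org
  COMMENT=       A short description of foo

  .include <bsd.port.mk>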

>     This does solidify my opinion that we need VFS environments to provide
>     truly isolated setups for various subsystems (e.g. apache vs login user
>     vs root vs mail vs whatever), and then run the port/packaging system
>     within each environment.  The one thing I have *always* dreaded when
>     doing large installs is that I would have half a dozen subsystems all
>     working properly and then I would try to install something for some
>     other subsystem and blow up one of the existing subsystems.  UNIX has
>     this wonderful concept of a 'user' which is seriously under-used when it
>     comes to services.

I'm really looking forward to seeing what creative minds will make of
these VFS environments; they are also very interesting for a security
framework, for example.

I offered on irc to implement an emulation with mfs and union fs as a
portupgrade feature, but got strong opposition against that (dirty)
hack (union mount an mfs over /usr/local, install, tar it up ;)
(Another place to hook in a partial emulation would be the runtime linker.)
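
Spelled out, the dirty hack is roughly this (sizes and command names
vary by BSD; some systems call it mount_unionfs):

  $ mkdir -p /mnt/overlay
  $ mount_mfs -s 262144 swap /mnt/overlay       # scratch memory filesystem
  $ mount_union /mnt/overlay /usr/local         # attach it above /usr/local
  $ cd /usr/ports/mail/postfix && make install  # new files land in the mfs
  $ tar -C /mnt/overlay -czf /tmp/postfix.tgz . # capture exactly what was added
  $ umount /usr/local; umount /mnt/overlay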

While I'm looking forward to that VFS feature and see a lot of potential
in it, I think it could be integrated into ports and spare us porting
10000 (or let it be 1000 for starters) ports. I am fairly sure that can
be done with much less effort than starting completely from scratch.


Andy