autoconf (was: Compatibility with FreeBSD Ports)

Joerg Sonnenberger joerg at britannica.bec.de
Fri Aug 12 12:08:22 PDT 2005


On Fri, Aug 12, 2005 at 11:33:40AM -0700, Claus Assmann wrote:
> > I think the point is that instead of guessing what an OS supports,
> > the makers of the OS should say what they think they support, and
> > you only test for their claims.
> 
> How?

Look at what X11R6 did over all those years: imake was exactly about
that, a single configuration file for the *vendor* to provide.
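
Roughly, the idea is that the vendor ships one file stating what the
platform supports, and the build system just reads it. A hypothetical
fragment in the C-preprocessor style of an X11 .cf file (the symbol
names here are illustrative, not taken from any real vendor file):

    /* hypothetical vendor .cf fragment: the vendor declares its features */
    #define OSName          ExampleBSD
    #define HasSnprintf     YES   /* snprintf() is present and works */
    #define HasPoll         YES   /* poll(2) is available */
    #define HasShm          NO    /* no SysV shared memory */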

> Is there something better than autoconf?

That's the wrong question.

> My configure script does of course test whether a feature actually
> works, as far as that is possible.

That would make it different from 95%+ of all configure scripts. Just to
name a few typical examples of incorrect autoconf use (in combination
with the rest of the GNU toolchain):
(a) libtool contains operating-system-specific magic in ltmain.sh, which
can easily be replaced by a newer version without hassle. But it also
duplicates part of this magic in libtool.m4, which gets embedded into
configure, normally about three times. This is not a problem for most
programs, but some decide to alter their behaviour based on whether
shared objects are supposed to be supported or not. A good (bad) example
is sudo, which doesn't build the exec LD_PRELOAD wrapper on DragonFly
without patching configure.
(b) pkgsrc's bootstrap checks for the existence of sys/statvfs.h and
statvfs(3). It uses the former if found and emulates the latter if not
found -- but it still tries to include sys/statvfs.h even when it
conflicts with the compat definition (the correct conditional include is
sketched after this list). Sure, the problem here was DragonFly's fault
for having an incomplete implementation, but the configure behaviour is
nevertheless broken as well.
(c) how many configure scripts check for one behaviour and ignore the
rest of the check?
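
For case (b), the application-side pattern that the configure check is
supposed to drive is simple. A minimal sketch, using the usual
autoconf-style macro name HAVE_SYS_STATVFS_H for illustration -- the
compat declaration shown here is not pkgsrc's actual emulation:

    /* Include the header only if the check for it succeeded; otherwise
     * fall back to a compat declaration instead of also pulling in a
     * header that may conflict with it. */
    #ifdef HAVE_SYS_STATVFS_H
    #include <sys/statvfs.h>
    #else
    struct statvfs {
            unsigned long f_bsize;      /* file system block size */
            unsigned long f_blocks;     /* total data blocks */
            unsigned long f_bfree;      /* free blocks */
            /* ... */
    };
    int statvfs(const char *, struct statvfs *);
    #endif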

Xorg decided to move to autoconf for the modular tree; the net result is
a huge increase in both build time and distribution size, simply because
of the CPU hours it takes to run all the autoconf-generated configure
scripts and the space they occupy in all the packages. IMO it's a real
regression.

Testing for *special* features is OK, but it has to be done carefully,
and doing it in almost every program is simply a pain in the ass. That
still ignores broken tests and compatibility problems that aren't
checked for at all. Just look at how many programs still want to include
malloc.h or declare malloc themselves. Same for errno. The correct
locations for both were specified 15 years ago by ISO C90 (<stdlib.h>
and <errno.h>), but do you really believe that stupidity has gone away
over all that time? If any (old) platform still exists that doesn't
follow these basic requirements, why isn't that checked by autoconf and
worked around *correctly*? There are guides for Linux that even suggest
adding your own malloc prototype for portability -- resulting in quite
the opposite behaviour.
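
To make the malloc/errno point concrete, this is all ISO C90 asks for --
a trivial sketch, with nothing platform-specific in it:

    /* ISO C90: malloc() is declared in <stdlib.h>, errno lives in
     * <errno.h>.  No <malloc.h>, no "extern char *malloc();", no
     * "extern int errno;". */
    #include <stdlib.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char *p = malloc(64);

            if (p == NULL) {
                    /* C90 itself doesn't require malloc() to set errno
                     * (POSIX does), so this is a best-effort message. */
                    fprintf(stderr, "malloc: %s\n", strerror(errno));
                    return 1;
            }
            free(p);
            return 0;
    }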

Joerg




