Re: makefile(.abi) vs. autoconf/automake/libtool/etc.


Subject: Re: makefile(.abi) vs. autoconf/automake/libtool/etc.
From: Sam TH (sam@uchicago.edu)
Date: Thu Feb 15 2001 - 18:29:38 CST


On Thu, Feb 15, 2001 at 11:03:11AM -0800, Paul Rohr wrote:
> It looks like we have three kinds of folks on this list:
>
> A. people who understand (and like) our diving Makefile system
> B. people who like (and perhaps understand) the auto* system
> C. masochists willing to maintain parallel tool-specific build env.s
>
> I get the impression that these are disjoint sets, so to help out, here's an
> incomplete translation aid, from the perspective of our current build
> system. I'd love feedback and corrections from folks who grok the other
> perspective(s).

Well, I hope they aren't disjoint sets. I like to think I fall in
both A and B. I like the current system a lot. It's quite
impressive, and it does its job well. And I do like, and like to
think I understand, the autotools system.

I'm certainly not in category C, but if people want to be there (or
have to be there, because the Apple Unix-like shell couldn't be TOO
Unix-like, no, that would be too easy. Grr), that's their business.

>
> Since we're talking about build systems, I've arbitrarily chosen the
> following criteria for comparison purposes. Feel free to propose others,
> along with an explanation of how they're handled in at least one of these
> worlds.
>
> criteria
> --------
> 1. platform support, including ease of adding new ones
> 2. toolchain required
> 3. build targets
> 4. dependencies
> 5. ease of maintenance in abi tree
> 6. ease of maintenance in peer modules
> 7. rebuild speed
> 8. full build speed
>

There's one more criterion I would like to propose, because it's the
main reason I want to use autoconf:

9. Allows the elimination of lines 176-188 of ut_types.h.

That's the section where ICONV_CONST is defined. It's the worst hack
I've ever written, and it's going to break on some system sometime.
But worst of all is this: it's a solved problem. People have dealt
with exactly this sort of issue before and figured out how to handle
it. Dynamic tests can be constructed so that we don't have to keep
adding defines, and that's what autoconf gives us.
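To show what I mean by a dynamic test, here's a rough sketch of the
configure check that could replace that #ifdef pile. The macro name
and cache variable are made up, and the real thing would want more
care, but every autoconf primitive used here is real:

AC_DEFUN(ABI_ICONV_CONST,
[AC_CACHE_CHECK([whether iconv wants a const char ** input],
  abi_cv_iconv_const,
  [AC_TRY_COMPILE([#include <stddef.h>
#include <iconv.h>
/* if the system prototype already says const char **, this
   conflicting non-const declaration will fail to compile */
extern size_t iconv(iconv_t, char **, size_t *, char **, size_t *);],
    [],
    abi_cv_iconv_const=no, abi_cv_iconv_const=yes)])
if test "$abi_cv_iconv_const" = yes; then
  abi_iconv_const=const
else
  abi_iconv_const=
fi
AC_DEFINE_UNQUOTED(ICONV_CONST, $abi_iconv_const)])

configure then figures it out per system at build time, and
ut_types.h never has to know which platform it's on.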

>
> A. diving Makefiles
> ====================
> The current diving make build system for AbiWord was designed by Jeff, a
> Makefile guru, as a streamlined variant of the kind of build system used for
> Mozilla.
>
> 1. platform support
> --------------------
> (strength) Runs using gmake and native compilers on every supported
> platform except legacy Macs. All configuration info for platform-specific
> tools are expressed using Makefile syntax in the following compact stubs:
>
> abi/src/config/platforms/win32.mk
> abi/src/config/platforms/linux.mk
> etc.
>
> Note that because we use a carefully-constrained subset of standard C
> libraries (more or less ANSI), anything beyond that gets wrapped with our
> own util functions here:
>
> abi/src/af/util/*
>
> This separation of platform-specific weirdness (tools vs. libraries) is
> worth noting.
>
> 2. toolchain required
> ----------------------
> (strength) Requires only gmake, sed, and a few other small tools. Most of
> these have command-line analogs on other platforms, and any syntax
> differences are easily learned.
>

This point is related to #1, but I'll make it here. The other major
requirement of the Makefile system is a Bourne shell. It's needed to
run sed and make, and also for a number of the tests. It's why we
don't build under MPW (its shell isn't Bourne-compatible, although
I'm not sure why a compatible one couldn't be ported. Leonard?).

A Bourne shell is present or available on basically every
non-(classic Mac) computer in the world today.

> 3. build targets
> -----------------
> (strength) Allows generation of multiple build variants in the same,
> unmodified tree -- just run with different environment settings, and you get
> different build targets here.
>
> abi/src/WIN32_1.1.8_i386_DBG/*
> abi/src/WIN32_1.1.8_i386_OBJ/*
> etc.
>
> Thus, to clean the tree, you just have to prune those directories.
>

This is a really nice feature. (It does mean that when you change
kernels, you need to do some manual pruning, but that's minor.)

This would be hard to support in the current form with autoconf (I
think). However, it could be hacked in, and if people really want it,
I'll do it.

However, the reason it's not usually necessary is that autotools
build systems solve this problem in a very different way. The
standard way of building with a configure script is to do it in a
directory separate from the source tree. It works roughly like
this:

        [sam@localhost /foo/bar/abi]$ ls
        src
        user
        etc
        ...
        [sam@localhost /foo/bar/abi]$ cd ..
        [sam@localhost /foo/bar]$ mkdir buildabi
        [sam@localhost /foo/bar]$ cd buildabi
        [sam@localhost /foo/bar/buildabi]$ ../abi/configure --options
        [sam@localhost /foo/bar/buildabi]$ ls
        Makefile
        [sam@localhost /foo/bar/buildabi]$ make
        lots of output here
        ...
        
Then, when you want to build with gnome enabled, or pspell, or
debugging, or whatever, you create a different build directory, and
repeat the process.
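For instance (the --enable flag is hypothetical; whatever options we
end up defining would go here):

        [sam@localhost /foo/bar]$ mkdir buildabi-debug
        [sam@localhost /foo/bar]$ cd buildabi-debug
        [sam@localhost /foo/bar/buildabi-debug]$ ../abi/configure --enable-debug
        [sam@localhost /foo/bar/buildabi-debug]$ make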

Running make clean in one of the build directories cleans just that
directory, and so on.

This poses a few complexities having to do with our peer directories,
but I think they are solvable.

> 4. dependencies
> ----------------
> (weakness) Doesn't attempt to track dependencies.
>
> 5. ease of maintenance in abi tree
> -----------------------------------
> (strength) As mentioned above, supporting new platforms consistently is
> easy, and the work scales appropriately.
>
> To add new files to the tree requires minimal Makefile maintenance at the
> appropriate nodes of the tree. Most of the work happens by including common
> *.mk stubs, so adding each new file is usually a one-line change. Each new
> directory added requires a Makefile at that node, plus a reference in the
> Makefile one level up. Again, the work scales appropriately.
>
> 6. ease of maintenance in peer modules
> ---------------------------------------
> (mixed) The strength is that by adding a single XP Makefile.abi to the peer
> module, we can guarantee that compatible object files and libraries are
> dropped into our build system at the appropriate spots, without otherwise
> affecting the source trees of the peer modules in any way.
>
> In short, this means that peer modules inherit advantages #1, 3, and 5
> above. Plus which, we can choose to build only the portions of peer modules
> that we need, in very different ways than the original maintainers intended,
> without affecting the integrity of their stuff.
>
> The weakness is that those Makefile.abi files aren't usually maintained by
> the owners of the respective modules. Thus, upstream changes to add or drop
> files need to get mirrored in Makefile.abi by one of us. The work scales
> appropriately, but it's annoying.
>

It should be noted that the current system involves exactly one
Makefile.abi, the one in wv. Both psiconv and expat use their native
build systems, as do the various other libraries (I believe).

> 7. rebuild speed
> -----------------
> (strength) Because this is a diving make system, rebuilds can be localized
> by diving to the appropriate level of the tree and doing the appropriate
> make variants (tidy, clean, realclean) there.
>
> The scale factors are nice here, because this mirrors and reinforces the
> modularity of the code. Localized API changes which only affect a small
> part of the tree can be rebuilt quickly. API changes which affect the
> entire tree require massive clean rebuilds of the tree (and usually get
> mentioned as such during commits).
>

Fundamentally, it works like this: if you change any significant
header file, you have to rebuild the whole abi tree. If you don't,
and it segfaults on something, then you rebuild the whole tree just to
make sure. At least that's how it ends up for me.

> 8. full build speed
> --------------------
> (unknown) Most of the overhead of a diving make system comes from
> repeatedly invoking make for yet another Makefile stub which is including
> the same sets of common logic. A potential downside is that most of the
> time spent isn't triggering make rules, but doing sed calculations, etc. to
> reestablish the path-relative build environment for yet another node of the
> tree.
>
> Still, the real test here is to do head-to-head comparisons.
>
>
> B. autoconf + automake + libtool
> =================================
> I'm starting to understand how this whole paradigm is supposed to work, but
> there may be plenty that I'm missing.
>
> Autoconf and friends are unix-centric tools that do a lot of shell-scripting
> magic to help abstract out the details of various platform-specific kinds of
> weirdness -- historically, there was a *ton* of gratuitous vendor-specific
> "differentiation" in the old-style Unix world -- and construct makefiles
> that should build properly on those platforms.
>
> 1. platform support
> --------------------
> (mixed) These tools are quite widely used, and somewhat well-understood, in
> various Unix communities -- some more than others. They're used to abstract
> out *both* of the following sources of platform variations:
>
> - toolchain stuff (compiler/linker names and options)
> - crufty C library stuff
>

Actually, they can be used to handle the presence or absence of
virtually any feature of the target system. For example, instead of
defining ABI_OPT_LIBXML2 by hand, an autoconf macro could detect
which XML libraries you have on your system, define the appropriate
symbols, and build against the right one. You could, of course,
override that choice. For a real look at lots of different macros,
for lots of different things, check out
        http://cryp.to/autoconf-archive/
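Just as a sketch (the --with option, the variable names, and the
expat fallback are illustrative, not a proposal for how AbiWord
should actually pick its parser), the configure.in side of that
could look like:

abi_xml_parser=expat
AC_ARG_WITH(libxml2,
[  --with-libxml2          use libxml2 instead of expat for XML parsing],
[test "$withval" != no && abi_xml_parser=libxml2])
if test "$abi_xml_parser" = libxml2; then
  AC_CHECK_LIB(xml2, xmlParseFile,
    [AC_DEFINE(ABI_OPT_LIBXML2)
     XML_LIBS=-lxml2],
    [AC_MSG_ERROR(libxml2 requested but not found)])
else
  AC_CHECK_LIB(expat, XML_ParserCreate,
    [XML_LIBS=-lexpat],
    [AC_MSG_ERROR(expat not found)])
fi
AC_SUBST(XML_LIBS)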

> These tools are almost never used anywhere else, where either raw Makefiles
> or more tool-specific project files are preferred.
>

For a history of these tools, see the Goat book at
        http://sources.redhat.com/autobook/

> 2. toolchain required
> ----------------------
> (mixed) The recurrent claim is that these tools are no more difficult to
> port than the lower-level toolchain used in the existing build system. This
> claim usually goes unproven, presumably because the intersection of the
> following two populations is pretty small:
>
> - auto* experts
> - people developing on non-Unix platforms
>
> Brave Sam, who falls in neither category AFAIK, is trying to address these
> problems anyhow. ;-)
>

Well, personally I think our hardest platform to run the Makefiles on
is Windows, with BeOS/PPC a close second. And back when that box
still had telnet access (which was shut off for some unknown reason,
costing us a shipping platform), I compiled GNU Make, GNU Autoconf,
GNU Automake, GNU Libtool, and several projects using all of the
above on it.

Furthermore, people have put significant effort into getting the
autotools to work with Cygwin, both with gcc and with MSVC.

And I'm not an autotools expert yet, and I don't have any non-unix
platforms easily accessible, but I am trying. Anyone who wants to
donate a Windows box is welcome to. :-)

> 3. build targets
> -----------------
> (unknown) I have no idea whether or how an auto* toolchain can preserve the
> flexibility and cleanliness of the existing system as mentioned above.
>
> Since the final result of the auto* process are just Makefiles, I assume
> that this could eventually be done with sufficient work, but the opaqueness
> of those tools (to me, at least) makes it hard to assess how difficult
> this'd really be. It's not obvious at first glance that any of these tools
> were designed to meet this goal.
>

See my lengthy description of this above.

> 4. dependencies
> ----------------
> (mixed) These tools do support dependency tracking, but only for gcc users.
> This is nice for them, but does nothing for the rest of us. In fact, it
> could tend to reduce the awareness of locality fostered by the existing
> system (see #7 above).

Well, I have no idea what dependency tracking looks like for
cl.exe. If it has facilities to support this, they could be worked
in.
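For the record, what the gcc-based tracking boils down to under the
hood is roughly this (the file names are made up):

        $ gcc -MM ut_example.cpp
        ut_example.o: ut_example.cpp ut_example.h ut_types.h

automake arranges for output like that to be generated and included
automatically as you build, so the Makefiles always know which
objects depend on which headers.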

>
> 5. ease of maintenance in abi tree
> -----------------------------------
> (strength?) For the specific tasks mentioned in #5 above, I have no idea
> what the required work is. However, I assume it scales as well as the
> current approach, or we wouldn't be considering this at all.
>
> I'm getting a vague sense that the complete set of static Makefiles are
> built once using configure, and then automatically rebuilt as needed. Is
> this correct?
>

First, maintainability. The current system is about 10,000 lines of
Makefile code. I think the autotools can cut that in half, at least.
Since I'm the one who seems to do most of the Makefile hacking
recently, that sounds good to me. And the generated makefiles are
quite simple. Here's src/wp/ap/GNUmakefile.am:

LIBTOOL = @LIBTOOL@ --silent

SUBDIRS= xp unix

noinst_LTLIBRARIES= libAp.la

libAp.la: xp/libWpAp_xp.la @PLATFORM@/libWpAp_@PLATFORM@.la
        $(LIBTOOL) --mode=link $(CXX) -o libAp.la xp/libWpAp_xp.la \
                @PLATFORM@/libWpAp_@PLATFORM@.la

Pretty easy.

As for how the Makefiles get generated, the sequence goes like this:

Automake:
        takes as input: Makefile.am (and configure.in)
        outputs: Makefile.in
Autoconf:
        takes as input: configure.in (and a bunch of macros)
        outputs: configure
configure:
        takes as input: Makefile.in
        outputs: Makefile

Note that all but the configure step are platform independent, and
their output could be generated on Unix machines and then committed
to CVS, if people don't want GNU M4 (which autoconf requires) and
friends on their systems. Most projects just keep Makefile.am and
configure.in in CVS, though.
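In command terms, the standard dance (nothing AbiWord-specific here)
goes roughly like this:

        [sam@localhost /foo/bar/abi]$ aclocal    # collect macros into aclocal.m4
        [sam@localhost /foo/bar/abi]$ automake   # Makefile.am -> Makefile.in
        [sam@localhost /foo/bar/abi]$ autoconf   # configure.in -> configure
        (those three need perl and GNU M4; the two below need only a
         Bourne shell and make)
        [sam@localhost /foo/bar/buildabi]$ ../abi/configure   # Makefile.in -> Makefile
        [sam@localhost /foo/bar/buildabi]$ make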

> 6. ease of maintenance in peer modules
> ---------------------------------------
> (strength?) If we can figure out how to use unmodified makefiles as
> provided by the upstream maintainers, then that certainly minimizes
> integration headaches.
>
> The unknown here is how easy it will be to configure those modules to be
> built in the ways we need. The current approach seems to be to pass the
> contents of the following file as environment arguments to configure:
>
> abi/src/config/platforms/win32.mk (or the equivalent)
>
> This feels busted, so I assume that the real fix is to move more of the
> required platform-specific awareness to configure itself. Again, I have no
> idea how much of what kind of work that entails.
>

Passing those environment variables is definitely busted. We want to
pass options to configure that say "please use cl.exe as the
compiler", among other things. I'm sure this can be done.

> 7. rebuild speed
> -----------------
> (mixed) In theory, the static makefiles generated by autoconf and friends
> could be faster, since any and all platform-specific and path-specific
> configuration is hardwired into the makefile each time its rewritten.
>
> However, this would mean that makefiles would have to be regenerated for
> each variant configuration being built -- with the probable exception of
> debug vs. release, if the makefiles are written properly.
>

This is actually related to the solution to #3 above. You would have
a different set of generated Makefiles for each kind of tree (debug,
release, GNOME, etc).

> 8. full build speed
> --------------------
> (unknown) Again, the real test is to let 'em both rip on a few platforms
> (Unix and not) and see who wins.
>

Well, as one of the few people who has run them head to head, I think
that the autotools build system is faster. But that's a subjective
judgement.

The real speed increase, however, comes because auto* generates
Makefiles which can be run in parallel. This means you can use make's
-j option, which lets it run multiple jobs at once. On a fast
machine, where much of the build time is spent waiting on I/O, this
can be a significant speedup.
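For example (pick a number that suits your machine):

        [sam@localhost /foo/bar/buildabi]$ make -j3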

This doesn't work with the current Makefile system; I've tried it. I
don't know enough about parallel make right now to change that, or to
know whether it can be changed. I could, of course, learn. :-)

>
> C. parallel tool-specific build environments
> =============================================
> Most of us will continue to use a common build system, so that changes
> automatically show up on our platform, too. However, some folks love the
> feature sets of their IDE so much that they're willing to assume the
> maintenance burden of keeping a parallel build environment in sync -- for
> example, MSVC Project files.

Other than Hubert, whom I feel bad for, I don't think these are ever
going to be necessary. However, more power to Mike for keeping them
up to date. I'm impressed.

Well, that was long. I hope it was informative, too.
           
        sam th
        sam@uchicago.edu
        http://www.abisource.com/~sam/
        GnuPG Key:
        http://www.abisource.com/~sam/key



