Hacker News

1000x this. Nothing is more "entertaining" than watching a configure script run for a full minute, testing for the presence of standard headers like stdio.h (like wtf are you going to do without that?), in order to build a makefile that can then actually compile the project in only four seconds.

This is probably also why libtool's configure probes no fewer than 26 different names for the Fortran compiler my system does not have, and then spends another 26 tests to find out if each of these nonexistent Fortran compilers supports the -g option.
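Each of those probes boils down to something like the shell sketch below: walk a list of candidate names and take the first one that exists. (This is a simplified, hypothetical version; the real libtool/autoconf macros also try to compile a test program with each candidate, and the compiler names here are just a few of the many that get tried.)

```shell
#!/bin/sh
# Sketch of an autoconf-style compiler search: try each candidate name in
# turn and keep the first one found on PATH. Simplified -- the real macro
# also compiles a test program, and then separately probes flags like -g.
found=
for fc in gfortran g95 f95 f90 f77; do   # a few of the candidate names
  printf 'checking for %s... ' "$fc"
  if command -v "$fc" >/dev/null 2>&1; then
    echo yes
    found=$fc
    break
  fi
  echo no
done
echo "checking for Fortran compiler... ${found:-none}"
```

On a system with no Fortran compiler at all, every probe prints "no" and the search still walks the whole list, which is exactly the minute of scrolling output being complained about.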

https://queue.acm.org/detail.cfm?id=2349257



I took my chances with autotools in 2014.

I did it to see for myself whether the autotools bashing is justified, and I can tell you this: if a configure script is that bad, then the project's developers did a poor job using autotools.


It's hard to believe the problem lies solely with the developers when every project is this bad. For fun, I just downloaded the source for GNU grep. You'd figure they might have some experience. Among the 600 tests configure performs is

    checking whether mkfifoat is declared without a macro... yes
Why? Nowhere in grep is there a single call to mkfifoat. Why does it care?

I also liked this:

    checking for MAP_ANONYMOUS... yes
    checking for MAP_ANONYMOUS... yes
    checking for MAP_ANONYMOUS... yes
    checking for MAP_ANONYMOUS... yes
    checking for MAP_ANONYMOUS... yes
    checking for MAP_ANONYMOUS... yes
    checking for MAP_ANONYMOUS... yes
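The repetition presumably happens because several independently imported modules each rerun the same probe, with nothing caching the result. Autoconf's cache variables are exactly the mechanism meant to prevent this. A toy sketch of the difference (hypothetical function and variable names, not grep's actual configure):

```shell
#!/bin/sh
# Uncached probe: every module that wants the answer reruns the test.
probe_map_anonymous() {
  # stand-in for the real compile test against <sys/mman.h>
  echo "checking for MAP_ANONYMOUS... yes"
}

# Cached probe: the first call remembers its answer in a cache variable,
# later calls reuse it instead of testing again.
cached_probe() {
  if [ -z "$ac_cv_map_anon" ]; then
    ac_cv_map_anon=yes   # remember the first answer
    echo "checking for MAP_ANONYMOUS... $ac_cv_map_anon"
  else
    echo "checking for MAP_ANONYMOUS... (cached) $ac_cv_map_anon"
  fi
}

probe_map_anonymous; probe_map_anonymous   # duplicates, as in grep's output
cached_probe; cached_probe                 # second call hits the cache
```

Across separate runs of configure, passing `-C` (`--config-cache`) persists results to config.cache, so a rerun skips most probes entirely.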


The downside is that once you make it work, there is no incentive or glamor in going back to clean up. As a result, not enough people ever gain even the minimum knowledge needed to fix things, and whatever currently works risks breaking.

And the work carries extra risk because you can't actually test on more than a couple of distributions before you ship.



