- When waf is run with -v, and it runs a call to context.cmd_and_log() with
an argument list where argument[0] is a relative path and a cwd kwarg is
passed so that argument[0] resolves correctly, the call will crash saying
the program could not be found. For example, the caller may be wrapping
calls using a nodejs environment like:
ctx.cmd_and_log("./node_modules/.bin/webpack", cwd="webui")
and this will fail with "Program ./node_modules/.bin/webpack not found!"
if waf is run with -v. The user-friendly check for usable programs stays
in place for shell calls and absolute paths, but the caller may now use
this pattern even when verbose mode is on. The same fix was previously
made for context.exec_command().
- When waf is run with -v, and it runs a call to context.exec_command() with
an argument list where argument[0] is a relative path and a cwd kwarg is
passed so that argument[0] resolves correctly, the call will crash saying
the program could not be found. For example, the caller may be wrapping
calls using a nodejs environment like:
ctx.exec_command("./node_modules/.bin/webpack", cwd="webui")
and this will fail with "Program ./node_modules/.bin/webpack not found!"
if waf is run with -v. The user-friendly check for usable programs stays
in place for shell calls and absolute paths, but the caller may now use
this pattern even when verbose mode is on.
Implements support for Qt6 by extending qt5.py. The user can opt in to
Qt6 support by setting cfg.want_qt6 = True. There is also a qt6 feature,
which at the moment is identical to the qt5 feature; the split is done
now for future-proofing purposes. Qt6 libraries can be selected through
the cfg.qt6_vars variable. No attempt is made at backwards compatibility
by loading cfg.qt5_vars if it exists, so that the move from Qt5 to Qt6
is a more deliberate process.
Signed-off-by: Rafaël Kooi <3961583-RA-Kooi@users.noreply.gitlab.com>
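A minimal wscript sketch of the opt-in described above (the library and
target names are illustrative, not taken from the commit):
def options(opt):
    opt.load('qt5')
def configure(cfg):
    cfg.want_qt6 = True                       # opt in to Qt6
    cfg.qt6_vars = ['Qt6Core', 'Qt6Widgets']  # optional library selection
    cfg.load('qt5')
def build(bld):
    bld(features='qt6 cxx cxxprogram', source='main.cpp', target='app',
        use='QT6CORE QT6WIDGETS')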
Improves autodetection by adding the tool naming found in some recent
distributions (e.g. Fedora). Also adds the possibility to pass
command-line options to the tools via env (needed e.g. to explicitly
pass the generator, due to changes in the uic/rcc tools).
The test is updated to demonstrate and document the parameter needed to
work out of the box with the newest tooling.
shlib is a no-op in some bare-metal newlib-based toolchains (for example
the riscv one), and causes the check to fail as the --dynamic flag is
not recognized
Make it easy to add custom target executions in the automatic
eclipse configuration generation, for example to call other
standard waf targets from other tools or with specific options.
Make sure just unique include paths (both system and local) are
added to prevent overcrowding with useless redundant include paths
that grow up a lot the generated XML file and make the usage of
the GUI messy.
The filter was already there for Java/Python.
Add automatic generation of editor language settings for C and C++,
so the automatic code correction uses the correct compiler and
compiler flags, including for example the correct C/C++ standard
so construct from such standards are correctly managed by the IDE.
Correct compiler and flags are automatically generated using the
build environment data gathered during configure phase.
The playground example has been modified to contain some code that
is standard specific to demonstrate the new feature when run under
Eclipse.
In the current implementation if a project is using
build variants it's not possible to use the clang_compilation_database
plugin because it strips the variant information from the build object.
Rework how gccdeps' cached_nodes lock is used so acquiring the lock is
only necessary on a cache miss. Also use a "with" context manager to
simplify management of the lock lifecycle.
Ported from 8b5a2a2086
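The pattern looks roughly like this (a sketch with illustrative names,
not the exact gccdeps code):
node = cached_nodes.get(key)          # fast path: no lock on a cache hit
if node is None:
    with cached_nodes_lock:           # slow path: lock only on a miss
        node = cached_nodes.get(key)  # re-check under the lock
        if node is None:
            node = cached_nodes[key] = bld.root.find_resource(path)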
Move the scan() method down in the file to match msvcdeps' method
ordering. This makes it easier to compare gccdeps.py and msvcdeps.py
to keep them in sync.
Visual Studio returns paths to dependencies with incorrect case.
ant_glob() is very slow for this use case (a 40-50% impact on overall
build time). This patch uses os.listdir() to find the correct case
of each path component.
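A rough sketch of the technique (illustrative; the real code differs):
import os

def correct_case(path):
    # rebuild the path, matching each component case-insensitively
    # against the actual directory listing
    parts = path.split(os.sep)
    cur, out = parts[0] + os.sep, [parts[0]]
    for part in parts[1:]:
        entries = os.listdir(cur)
        match = next((e for e in entries if e.lower() == part.lower()), part)
        out.append(match)
        cur = os.path.join(cur, match)
    return os.sep.join(out)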
In order to correctly set a default project in Visual Studio, any folders
must be listed at the top of the solution file. This change ensures that
any folders included in generated solutions sort to the top of the .sln
file. The default project, if one exists, will be located after the
folders. Note that it would also be correct to place the default project
at the top of the file, followed by any folders.
The unit test tool moved from a simple split to using shlex.split for
handling the unit test command. This results in the path separators on
Windows being treated as escapes.
To handle this, the unit test exec command is properly escaped before
joining, so that the subsequent split restores the original arguments.
The quote function is also exposed in the Utils module so that wscripts
making use of the unit test tool can properly quote their contributions
to the command as well.
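A sketch of the round-trip described above (the quote helper here is
illustrative; the exact function exposed may differ):
import shlex

def quote(arg):
    # single-quote so shlex.split() restores backslashes literally
    return "'" + arg.replace("'", "'\\''") + "'"

cmd = ['C:\\tools\\prog.exe', 'arg with space']
line = ' '.join(quote(a) for a in cmd)
assert shlex.split(line) == cmd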
Haxe support
This commit adds support for Haxe via the [lix](https://github.com/lix-pm/lix.client) toolkit
- haxe library validation: check and fetch missing libs if needed
- "haxe" loader with "hx" compiler
- HAXEFLAGS
- lib checking and uselib_store support
- ctx.haxe with a `res` argument to keep usage simple
- error checking
See merge request ita1024/waf!2308
- Exclude classes having folder or symlink outputs
- Exclude well-known Task classes from wafcache processing
- Remove stale 'waflib.Task.Task.chmod' processing
Add support for MinIO object storage (https://min.io/) to wafcache,
using the MinIO client (https://github.com/minio/mc).
MinIO is an open-source, self-hostable, S3-compatible cache. The
MinIO client supports MinIO connections as well as normal S3/GCS
storages by configuring aliases beforehand.
Hint: some distributions ship an `mc` (the GNU Midnight Commander)
which is not the MinIO client; be aware of this (or your build may get
stuck while waf waits forever for `mc` to finish).
Print out which source file waf is gathering dependencies for and leave
the leading spaces in the dependency debug output because it can be
helpful to see the dependency hierarchy.
Add support for generating and using gcc's native dependency files with
the GNU Assembler in addition to the existing C/C++ support.
When the gas and gccdeps tools are loaded, the configure step will test
whether gcc operating on an assembly file supports the -MMD argument.
If so, waf will pass the -MMD argument to .S files assembled with gcc
which will cause it to generate .d dependency files. Waf will then parse
those files for dependency information.
Note: This will only work for assembly files compiled through the gcc
frontend, not with GNU as directly. It also requires assembly files to
use the uppercase .S file extension.
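A minimal wscript sketch using this (the target and sources are
illustrative):
def configure(conf):
    conf.load('gcc gas gccdeps')

def build(bld):
    # startup.S goes through the gcc frontend; when -MMD support is
    # detected at configure time, a .d file provides its dependencies
    bld.program(source='main.c startup.S', target='app')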
Variable x is used in the outer loop and gets corrupted by the inner
enumeration in case of a non-jar dependency.
To reproduce: use demos/java and run waf build twice: the first time
will work (since no class files are around) while the second will not,
since by bad luck it will pick a class file in the inner loop.
clang_compilation_database: fix #2247, add clangdb command to generate database by request without rebuilding, add tests (WIP)
Closes #2247
See merge request ita1024/waf!2256
When using msvcdeps, header dependencies are not detected reliably for
generated source files. The root cause is a bug in versions of MSVC
prior to VS2019 16.0 in which it emits lower-case path prefixes when
resolving include paths relative to the containing file. Absolute paths
and paths relative to include directories passed in the MSVC command
line are, in contrast, case-correct.
Such a file-relative include directive with an incorrect lower-case
prefix derails waf's node hash signature handling and fails silently.
This change uses ant_glob() with the ignorecase keyword argument to
find the file on the filesystem with the correct case. The prior
case-correction code has been superseded and was removed.
See the following Visual Studio bug report for details on the issue:
https://developercommunity.visualstudio.com/content/problem/233871/showincludes-lowercases-some-path-segments.html
Make path_to_node() only accept a path as a string instead of also as a
list. That requires joining the list of path components in the relative
path case before calling path_to_node(). Also use path.pop(0) to remove
the first path component instead of copying the remainder of the path
using a slice operator.
Rework how msvcdeps' cached_nodes lock is used so acquiring the lock is
only necessary on a cache miss. Also use a "with" context manager to
simplify management of the lock lifecycle.
ant_matcher() converts an ANT glob pattern to an equivalent regex
pattern. This commit adds support for escaping parentheses in the
input pattern so they don't end up being treated as a regex capture
group.
Also add a unit test to verify ant_glob()'s ability to handle special
characters in the input pattern.
Previously one could explicitly state to use PySide2 or PyQt4 but not PyQt5, which was just picked by default. This way the option can override local configurations, and it also prevents mixed tool versions when we are sure we need PyQt5.
This patch corrects an error in the exec_response_command exception
handler, which always assumed that the execution's stdout would be
bound to the WafError exception object.
However, this assumption is only true when the execution completes with
a non-zero status code. For other exceptions, the stdout attribute is
not bound.
Now, when stdout is not available, the WafError msg will be used
instead.
The order of the lines in a doxyfile are important. This patch uses an
ordered dictionary to keep the keys of the doxyfile in the same order.
This is particularly important for doxyfiles that contain @INCLUDE
lines. In such cases, if the dictionary is not ordered, the @INCLUDE
line can end up in the middle of the generated doxyfile and thus
override all entries that were seen before it.
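A sketch of the idea (illustrative; not the tool's exact parsing code):
from collections import OrderedDict

def parse_doxyfile(text):
    # keep insertion order so @INCLUDE stays where the user put it
    params = OrderedDict()
    for line in text.splitlines():
        if '=' in line:
            key, _, value = line.partition('=')
            params[key.strip()] = value.strip()
    return params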
Currently PDBs are only installed if the /DEBUG flag appears in the
current toolchain's LINKFLAGS attribute. This patch expands support
so that /DEBUG:FULL and /DEBUG:FASTLINK also cause PDBs to be
installed.
Mach-O symbols are prefixed with an underscore. When specifying multiple
sub-regexes (e.g. 'sym1|sym2|sym3'), only the first will be matched
(since the expansion turns into '(?P<symbol>_?sym1|sym2|sym3)'). Here,
this is remedied by wrapping the symbol regex in a parenthesized group.
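The fix in a nutshell (a non-capturing group is used here for
illustration):
import re
sub = 'sym1|sym2|sym3'
bad = re.compile('(?P<symbol>_?%s)' % sub)        # '_?' binds to sym1 only
good = re.compile('(?P<symbol>_?(?:%s))' % sub)   # group applies '_?' to all
assert not bad.match('_sym2') and good.match('_sym2')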
This patch attempts to detect whether, when we are running from within
an MSYS2 environment (MSYSTEM is set), we are also executing inside an
MSYS2-provided version of Python. It does this by assuming that if we
are not in a cygwin environment and we are building on Windows, and the
value of sys.executable is under /usr/bin, /bin or /usr/local/bin
(something unixy), then we are running in an MSYS2 Python interpreter
and should compensate for MSYS2 root paths. Otherwise we shouldn't be
doing extra path manipulation.
Not all tools executed by tasks support the '@argsfile' syntax for
shunting command-line arguments to a file. This means that if such
commands are shunted to a file early, the command will not work. On
Windows the rc.exe command is such an example, but some tools on Linux
have similar limitations. In the posix case, we artificially limit our
command-line size because it is difficult/variable to calculate what the
actual limit is (it is partially dependent on environment size). This
could artificially cause commands to fail due to command-line length
when they otherwise wouldn't.
This patch fixes the issue by adding the 'allow_argsfile' flag to the
task. This way certain task instances can specify whether they are
compatible with the '@argsfile' syntax or not.
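A hedged sketch of how a task generator could opt its tasks out (the
feature name and hook are illustrative; only the 'allow_argsfile'
attribute comes from the patch):
from waflib.TaskGen import feature, after_method

@feature('winrc')
@after_method('process_source')
def disable_argsfile(self):
    for task in self.tasks:
        # rc.exe does not understand the @argsfile syntax
        task.allow_argsfile = False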
This patch addresses the bug described in issue #2225, where using
posix paths with an empty PREFIX value can result in files being
installed to the root of the drive specified by destdir instead of to
the desired prefix. The bug lies in the assumption that user-specified
paths that are strings will contain directory separators that match the
target operating system.
The previous patches to work around
http://support.microsoft.com/kb/830473 drastically overestimated the
number of characters in commands by treating the repr() version of the
command array as a reasonable estimator of command-line length. This
caused commands to attempt to write argsfiles before they should have.
The new calculation computes the number of characters in the command
array and adds the number of spaces that would be added by ' '.join(cmd);
this provides a much closer estimate of the command-line length.
This also limits the CLI length on non-Windows platforms to 200kB. This
prevents us from hitting the much larger argument limits on
Linux/BSD/MacOS platforms.
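In effect, the estimate becomes (illustrative):
cmd = ['link.exe', '/OUT:app.exe', 'a.obj', 'b.obj']
cmd_len = sum(len(arg) for arg in cmd) + len(cmd) - 1  # args plus spaces
assert cmd_len == len(' '.join(cmd))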
Most of the ant_globs used are explicitly and knowingly on the build
directory (e.g. javadoc, the jar regex) so the warning is quite spurious.
The only one that may be in doubt is the source regex: I added it here
too because if you use a code generator (e.g. protoc) then it is also
correct to glob on the build directory, and the warning is misleading.
The global value gccdeps was appended to CFLAGS and CXXFLAGS instead of
the actual flags tested against the compiler. This ignored
modifications to the GCCDEPS_FLAGS environment variable and complicated
adding support for additional compilers at the project level.
Previously the code erroneously used tg.bld.path instead of tg.path,
so for nested wscript calls the wrong directory was used in the search.
Also added better error handling, with an error message if an included
directory does not exist.
When the 'path' argument was given at TaskGen creation, it was not taken
into account when attributing the idx (the path of the build context
was). This is an issue when creating task generators from a waf tool,
because their idxs were attributed as if they were in the project root
directory even if another path was specified, which could lead to
output file collisions.
When CDT is not included in the project (e.g. we just have Python and/or Java) the current implementation would not automatically create a call to waf
for the build stage. This patch adds, in such cases, an external builder that automates the call to waf without the need to manually configure one.
Added support to search for generated source files and add them to the
source path, for both Java and Python. This is useful when using
generated code (e.g. protoc and pyqt5) so that browsing in Eclipse works
correctly, adding also the paths where the generated code is placed.
Extended the playground example to demonstrate generated code.
If any weights (i.e. `weight` or `tree_weight`) are set on a swig task
then those weights are passed on to the task created to compile the
wrapper generated by swig.
The previous logic in #1709 made the incorrect assumption that the
filename of a shared/static library indicates whether it was built as
multi-threaded or single-threaded. This assumption does not hold in many
Linux distributions.
In addition, Boost.Thread and Boost.Log require the -pthread (or some
other) flag in order to link properly.
When a shared library is compiled with precompiled headers enabled, this
change prevents the precompiled headers from activating on dependent
targets. Otherwise, there is an issue with -fPIC flag propagation.
Not sure if this worked anyway in very old nvcc versions, but as far as I can read in the documentation this should be the correct way, and I tested it out.
This adds a scanner method to track Erlang header dependencies.
Support for EUnit tests
Support for EDocs
Support for ERL, ERLC, ERLC_FLAGS environment settings.
When a tool cannot be loaded, the wrong path was displayed in the error
message. sys.path was always displayed, but the actual path used depends
on tooldir being passed and on the value of the with_sys_path parameter.
I put the exception handling (raising the fatal error) inside load_tool
itself, as this is the only place where the exact path is known, without
having to recalculate it outside. To be able to use fatal there, the ctx
also has to be passed from the various call points.
In this way all load_tool exceptions are caught and reported, whereas
before, an exception during configure, for example, was not caught,
only one during options.
* use DEST_OS in cfg_cross_gnu
* add an example
* rename cfg_cross_gnu to cross_gnu
* add configure()
* xcheck_envar -> xcheck_var
* xcheck_var to look in environ only if not already set
* protoc: added java support
Modified protoc to also support .proto -> .java generation. The
generated .java file name is not as obvious as in C++/Python but follows
a couple of rules that were implemented.
As the cxx/python and javaw tools are quite different, the
implementation is not as clean as for cxx/python but is hopefully fine
(e.g. protoc still uses sources for input files while javac uses
src_dir).
In javaw a small detail was added: a new attribute (gencode) that
instructs javac to look for source files also in the build directory.
These are realistically generated code (and .proto -> .java is an
example) and therefore live in the build directory. The default is
false, keeping all the previous behaviour.
* protoc for java enhancements (protoc version, regex, docs)
In the configure stage, get the protoc version, as the java naming
changes depending on the version. Implement the differences between
version < 2 and > 2.
Improve the regex for option catching, and implement a mix of the
options in playground to verify it.
Add some documentation on how java filenames and paths are generated.
* protoc: build dir with generated code is automatically added, so no need to explicitly use gencode in javac
Additionally:
- Scripting.parse_options is back for compatibility reasons
- The help message should only be displayed when this is intended
- OptionsContext is responsible for the full initialization, so
the framework should be usable without requiring Scripting.py
- Make it clear that Options.options is an optparse.Values object
- Get rid of the state in Options.options
The export symbol regular expression processing is updated to make
several improvements:
* The export expression (export_symbols_regex) now applies to both
functions and global variables
* A named capture group is used to match symbols. This allows the
export expression to contain capture groups without disrupting the
expression matching
E.g. when running 'waf xcode6 --targets=some-target'
File "/waf/waflib/Scripting.py", line 167, in waf_entry_point
run_commands()
File "/waf/waflib/Scripting.py", line 268, in run_commands
ctx = run_command(cmd_name)
File "/waf/waflib/Scripting.py", line 252, in run_command
ctx.execute()
File "/waf/waflib/extras/xcode6.py", line 679, in execute
self.post_group()
File "/waf/waflib/Build.py", line 767, in post_group
if self.current_group < self._min_grp:
AttributeError: 'xcode' object has no attribute '_min_grp'
Build flags like 'cflags', 'cxxflags' passed to xcode6 builds
are now considered by the xcode6 tool. For example, running command 'waf xcode6'
with the following wscript:
cnf.env.CXXFLAGS = ['-std=c++11']
...
bld.program(..., cxxflags='-O3')
now sets the OTHER_CPLUSPLUSFLAGS in Xcode to '-O3 -std=c++11'
The tool was using a relative path for the includes, but an absolute one
for the src files. Protoc cannot distinguish between relative and
absolute paths and is not able to find sources when relative and
absolute paths are combined.
Tested with protoc 2.6.1, python 3.5.1
* Let the default uselib_store and define_name be the upper case of the
  first word of the package. This is a better default when the package
  includes a version specifier.
* Remove the undocumented *k argument extraction from check_cfg since it
  breaks when the first argument includes a version specifier.
* Validate msg in only one block.
* Reduce the number of places that set okmsg.
* Require exactly one action to be requested.
* Also print the detected version on successful modversion.
Because Python's set type is unordered, storing include paths in it
can produce unnecessary re-builds by generating different compiler
command lines between successive builds. Avoid this by using the
sorted() function on the includes.
The documentation for Python ≥ 2.7 guarantees that sorted() is stable,
while for Python 2.5–2.6 it uses the same algorithm as list.sort(),
which is stable [1].
[1]: https://stackoverflow.com/a/1915418
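In effect (illustrative):
includes = {'src', 'build/gen', 'libs/foo'}      # a set: iteration order varies
flags = ['-I%s' % p for p in sorted(includes)]   # deterministic command line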
It appears that quite a few builds use the swig tool technique
of setting build dependencies after the build starts. Missing
entries in Runner/revdeps can make builds non-terminating.
- Have Task.weight apply to the current task only
- Do not rely on object addresses to set the build order
- Introduce tg.tg_idx_count to count task generators
- Enable propagating/non-propagating weights through Task.tree_weight/Task.weight
Vala compiler can use *.vapi files also as input files alongside *.vala
files. If you build a library, these vapi files are not included in
resulting *.deps files and are, therefore, suitable for internal
purposes.
Signed-off-by: Jiří Janoušek <janousek.jiri@gmail.com>
The 'source_files' param to the xcode6 tool was originally separated from the
conventional 'source' param because it was used to control how the source files
would appear in the Xcode folder UI. It also allowed adding files of any
extension, not limited to those extensions supported by the loaded set of waf tools.
This commit renames the 'source_files' param to 'group_files'. It also changes the semantics so that 'group_files' is now used like the following:
bld(
    source='...',      # These are now the files compiled by Xcode
    group_files=...,   # Optionally customize the way source files appear in the UI
)
Previously, 'source_files' was used to collect source files for compilation in Xcode and to customize the UI folder structure. In this commit 'source_files' is used only to let the user group files in different UI folders (and add additional resource files besides source files). The renaming better reflects the param's meaning.
Additional changes:
* Remove unique_filereference
* Updated examples
This gives the possibility to search the system (QT5_LIBDIR or the
automatically found library directory) for the available libraries
instead of using the list hardcoded in the tool. The search is done
using a regexp that matches the same files as the ones used for library
search, with support for dynamic/static and win32/unix.
This change makes the tool more versatile with respect to new versions
of Qt5, as we don't have to maintain the library list anymore. It should
also make configure faster, as only the libraries physically present
will be tested. Custom libraries installed on top of base Qt5 will also
be recognized with this method.
* glib2: Compile schemas per directory
By changing GSETTINGSSCHEMADIR during the build setup, or on single
tasks or generators, the user may place schemas in various locations.
Adding a post-build function for each of these locations compiles all
of them instead of only one global directory.
* glib2: Notify user about failed schema compilation
* glib2: Demo schemas installed to multiple places
A new schema lacking any enumerations was introduced. Installing it in
isolation simplifies the generator creation to the essential components
demonstrated.
This can be very useful when debugging unit test problems: when a test
is run outside waf, the LD_LIBRARY_PATH (or PYTHONPATH if used with the
pytest extra) that waf sets dynamically is missing, so running the
executable manually or via gdb is not immediate.
The whole environment is dumped into a Python script that can then be
executed to run the tests manually.
* cppcheck: fix the extra when multiple build rules are in a single wscript
When executed, the output from cppcheck is put inside cppcheck.xml and
the generated error output inside cppcheck/index.html (and related
subfiles). Of course, if two separate build rules are present, the files
will clash with each other and data will be lost.
So this would not work in the previous version:
bld.program(source=bld.path.ant_glob('src/ex-prog-*.cpp'), includes='src/', target='ex-prog-c')
bld.program(source=bld.path.ant_glob('src/ex-prog2-*.cpp'), includes='src/', target='ex-prog2-c')
In the output just one of the two results will be there (or in the worst
case we will have files being deleted/garbled) as they both try to work
on cppcheck.xml and index.html (in build and build/cppcheck
respectively).
With this commit the xml/html files carry a reference to the task name
(appended with a dash) so they are unique and don't clash. All the
messages to the user are corrected accordingly, so the user is pointed
to the correct file name (and so are the internal links generated in the
html file).
In the previous case we will now have:
ccpcheck detected (possible) problem(s) in task 'ex-prog2-c', see report for details:
file:///home/fede/waf/cppc/build/cppcheck/index-ex-prog2-c.html
ccpcheck detected (possible) problem(s) in task 'ex-prog-c', see report for details:
file:///home/fede/waf/cppc/build/cppcheck/index-ex-prog-c.html
* cppcheck: Provide as an option also the old way of a single index.html file, for compatibility
Sometimes it is useful to be able to add a module to waf as a tool.
Using this patch one can run ./waf-light configure build --tools /tmp/mytool
This will add the files under /tmp/mytool to waflib/extras/mytool, such
that they can be imported in a wscript as 'from waflib.extras import mytool'.
If compiler_cxx was not configured before qt5, then qt5 will try to build applications with an empty compiler, which gives very strange errors in the config log.
- Search for tools just in PATH not in other directories as for C++
- Remove options handling as there is none at the moment
- Use find_program instead of local find_bin
- Fix author
- Try to make documentation clearer
- Remove useless after_link decorator
Waf is a Python-based framework for configuring, compiling and installing applications. Here are perhaps the most important features of Waf:
* *Automatic build order*: the build order is computed from input and output files, among others
* *Automatic dependencies*: tasks to execute are detected by hashing files and commands
* *Performance*: tasks are executed in parallel automatically, the startup time is meant to be fast (separation between configuration and build)
* *Flexibility*: new commands and tasks can be added very easily through subclassing, bottlenecks for specific builds can be eliminated through dynamic method replacement
* *Extensibility*: though many programming languages and compilers are already supported by default, many others are available as extensions
* *IDE support*: Eclipse, Visual Studio and Xcode project generators (`waflib/extras/`)
* *Documentation*: the application is based on a robust model documented in [The Waf Book](https://waf.io/book/) and in the [API docs](https://waf.io/apidocs/)
* *Python compatibility*: cPython 2.7 to 3.x, Jython 2.7 and PyPy
Waf is used in particular by innovative companies such as [Avalanche Studios](http://www.avalanchestudios.se) and by open-source projects such as [RTEMS](https://www.rtems.org/). Learn more about Waf by reading [The Waf Book](https://waf.io/book/). For researchers and build system writers, Waf also provides a framework and examples for creating [custom build systems](https://gitlab.com/ita1024/waf/tree/master/build_system_kit) and [package distribution systems](https://gitlab.com/ita1024/waf/blob/master/playground/distnet/README.rst).
Download the project from our page on [waf.io](https://waf.io/), consult the [manual](https://waf.io/book/), the [API documentation](https://waf.io/apidocs/) and the [showcases](https://gitlab.com/ita1024/waf/tree/master/demos) and [experiments](https://gitlab.com/ita1024/waf/tree/master/playground).
## HOW TO CREATE THE WAF SCRIPT
Python >= 2.6 is required to generate the waf script, and the resulting file can then run on Python 2.5.
Just execute:
```sh
$ ./waf-light configure build
```
Or, if you have several python versions installed:
```sh
$ python3 ./waf-light configure build
$ python ./waf-light configure build
```
## CUSTOMIZATION
The Waf tools in waflib/extras are not added to the waf script. To add
some of them, use the --tools switch. An absolute path can be passed
if the module does not exist under the 'extras' folder:
```sh
$ ./waf-light --tools=compat15,swig
```
To customize the initialization, pass the parameter 'prelude'. Here is for example
how to create a waf file using the compat15 module:
An instance of the class _waflib.Context.Context_ is used by default for the custom commands. To provide a custom context object it is necessary to create a context subclass:
// advbuild_subclass
[source,python]
---------------
def configure(ctx):
    print(type(ctx))

def foo(ctx): <1>
    print(type(ctx))

def bar(ctx):
    print(type(ctx))

from waflib.Context import Context

class one(Context):
    cmd = 'foo' <2>

class two(Context):
    cmd = 'tak' <3>
    fun = 'bar'
---------------
<1> A custom command using the default context
<2> Bind a context class to the command _foo_
<3> Declare a new command named _tak_, but set it to call the script function _bar_
The execution output will be:
[source,shishell]
---------------
$ waf configure foo bar tak
Setting top to : /tmp/advbuild_subclass
Setting out to : /tmp/advbuild_subclass/build
<class 'waflib.Configure.ConfigurationContext'>
'configure' finished successfully (0.008s)
<class 'wscript.one'>
'foo' finished successfully (0.001s)
<class 'waflib.Context.Context'>
'bar' finished successfully (0.001s)
<class 'wscript.two'>
'tak' finished successfully (0.001s)
---------------
A typical application of a custom context is subclassing the build context to use the configuration data loaded in *ctx.env*:
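A minimal sketch of such a subclass (the command and function names are
illustrative):

[source,python]
---------------
from waflib.Build import BuildContext

class package_class(BuildContext):
    cmd = 'package'
    fun = 'package'

def package(ctx):
    print(ctx.env.CC) <1>
---------------

<1> The configuration data loaded from the cache is available in ctx.env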
This technique is useful for writing testcases. By executing 'waf test', the following script will configure a project, create source files in the source directory, build a program, modify the sources, and rebuild the program. In this case, the program must be rebuilt because a header (implicit dependency) has changed.
When the top-level wscript is read, it is converted into a python module and kept in memory. Commands may be added dynamically by injecting the desired function into that module. We will now show how to bind a simple command from a Waf tool:
// advbuild_cmdtool
[source,python]
---------------
top = '.'
out = 'build'
def options(opt):
    opt.load('some_tool', tooldir='.')

def configure(conf):
    pass
---------------
Waf tools are loaded once for the configuration and for the build. To ensure that the tool is always enabled, it is mandatory to load its options, even if the tool does not actually provide options. Our tool 'some_tool.py', located next to the 'wscript' file, will contain the following code:
[source,python]
---------------
from waflib import Context
def cnt(ctx): <1>
    """do something"""
    print('just a test')

Context.g_module.__dict__['cnt'] = cnt <2>
---------------
<1> The function to bind must accept a `Context` object as first argument
<2> The main wscript file of the project is loaded as a python module and stored as `Context.g_module`
The execution output will be the following.
[source,shishell]
---------------
$ waf configure cnt
Setting top to : /tmp/examples/advbuild_cmdtool
Setting out to : /tmp/advbuild_cmdtool/build
'configure' finished successfully (0.006s)
just a test
'cnt' finished successfully (0.001s)
---------------
=== Custom build outputs
==== Multiple configurations
The _WAFLOCK_ environment variable is used to control the configuration lock and to point at the default build directory. Observe the results on the following project:
// advbuild_waflock
[source,python]
---------------
def configure(ctx):
    pass

def build(ctx):
    ctx(rule='touch ${TGT}', target='foo.txt')
---------------
We will change the _WAFLOCK_ variable in the execution:
<1> The lock file points at the configuration of the project in use and at the build directory to use
<2> The files are output in the build directory +debug+
<3> The configuration _release_ is used with a different lock file and a different build directory.
<4> The contents of the project directory contain the two lock files and the two build folders.
The lock file may also be changed from the code by changing the appropriate variable in the waf scripts:
[source,python]
---------------
from waflib import Options
Options.lockfile = '.lock-wafname'
---------------
NOTE: The output directory pointed at by the waf lock file only has effect when not given in the waf script
==== Changing the output directory
===== Variant builds
In the previous section, two different configurations were used for similar builds. We will now show how to inherit the same configuration by two different builds, and how to output the targets in different folders. Let's start with the project file:
This chapter describes the Waf library and the interaction between the components.
=== Modules and classes
==== Core modules
Waf consists of the following modules which constitute the core library. They are located in the directory `waflib/`. The modules located under `waflib/Tools` and `waflib/extras` are extensions which are not part of the Waf core.
.List of core modules
[options="header", cols="1,6"]
|=================
|Module | Role
|Build | Defines the build context classes (build, clean, install, uninstall), which holds the data for one build (paths, configuration data)
|Configure | Contains the configuration context class, which is used for launching configuration tests and writing the configuration settings for the build
|ConfigSet | Contains a dictionary class which supports a lightweight copy scheme and provides persistence services
|Context | Contains the base class for all waf commands (context parameters of the Waf commands)
|Errors | Exceptions used in the Waf code
|Logs | Logging system wrapping the calls to the python logging module
|Node | Contains the file system representation class
|Options | Provides a custom command-line option processing system based on optparse
|Runner | Contains the task execution system (thread-based producer-consumer)
|Scripting | Constitutes the entry point of the Waf application, executes the user commands such as build, configuration and installation
|TaskGen | Provides the task generator system, and its extension system based on method addition
|Task | Contains the task class definitions, and factory functions for creating new task classes
|Utils | Contains support functions and classes used by other Waf modules
|=================
Not all core modules are required for using Waf as a library. The dependencies between the modules are represented on the following diagram. For example, the module 'Node' requires both modules 'Utils' and 'Errors'. Conversely, if the module 'Build' is used alone, then the modules 'Scripting' and 'Configure' can be removed safely.
User commands, such as 'configure' or 'build', are represented by classes derived from 'waflib.Context.Context'. When a command does not have a class associated, the base class 'waflib.Context.Context' is used instead.
The method 'execute' is the starting point for a context execution; it often calls the method 'recurse' to start reading the user scripts and execute the functions referenced by the 'fun' class attribute.
The command is associated to a context class by the class attribute 'cmd' set on the class. Context subclasses are added in 'waflib.Context.classes' by the metaclass 'store_context' and loaded through the function 'waflib.Context.create_context'. The classes defined last will replace existing commands.
As an example, the following context class will define or override the 'configure' command. When calling 'waf configure', the function 'foo' will be called from wscript files:
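For example (a minimal sketch):

[source,python]
---------------
from waflib.Configure import ConfigurationContext

class custom_configure(ConfigurationContext): <1>
    cmd = 'configure'
    fun = 'foo'
---------------

<1> Overrides the 'configure' command; 'waf configure' now calls the script function 'foo'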
The class 'waflib.Build.BuildContext' and its subclasses such as 'waflib.Build.InstallContext' or 'waflib.Build.StepContext' have task generators created when reading the user scripts. The task generators will usually have task instances, depending on the operations performed after all task generators have been processed.
The 'ConfigSet' instances are copied from the build context to the tasks ('waflib.ConfigSet.ConfigSet.derive') to propagate values such as configuration flags. A copy-on-write is performed through most methods of that class (append_value, prepend_value, append_unique).
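For example (illustrative):

[source,python]
---------------
from waflib.ConfigSet import ConfigSet

base = ConfigSet()
base.CFLAGS = ['-O2']
child = base.derive()                 # lightweight copy bound to 'base'
child.append_value('CFLAGS', ['-g'])  # copy-on-write: 'base' keeps ['-O2']
---------------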
The 'Parallel' object encapsulates the iteration over all tasks of the build context, and delegates the execution to thread objects (producer-consumer).
The overall structure is represented on the following diagram:
The context commands are designed to be as independent as possible, and may be executed concurrently. The main application is the execution of small builds as part of configuration tests. For example, the method 'waflib.Configure.run_build' creates a private build context internally to perform the tests.
Here is an example of a build that creates and executes simple configuration contexts concurrently:
// architecture_link
[source,python]
---------------
import os
from waflib.Configure import conf, ConfigurationContext
<1> Create task generators which will run the 'run_test' method defined below
<2> Create a new configuration context as part of a 'Task.run' call
<3> Initialize ctx.srcnode and ctx.bldnode (build and configuration contexts only)
<4> Set the internal counter for the context methods 'msg', 'start_msg' and 'end_msg'
<5> The console output is disabled (non-zero counter value to disable nested messages)
<6> Each context may have a logger to redirect the error messages
<7> Initialize the default environment to a copy of the task one
<8> Perform a configuration check
After executing 'waf build', the project folder will contain the new log files:
[source,shishell]
---------------
$ tree
.
|-- build
| |-- c4che
| | |-- build.config.py
| | `-- _cache.py
| |-- config.log
| |-- stdio.h.log
| `-- unistd.h.log
`-- wscript
---------------
A few measures are set to ensure that the contexts can be executed concurrently:
. Context objects may use different loggers derived from the 'waflib.Logs' module.
. Each context object is associated to a private subclass of 'waflib.Node.Node' to ensure that the node objects are unique. To pickle Node objects, it is important to prevent concurrent access by using the lock object 'waflib.Node.pickle_lock'.
==== Build context and persistence
The build context holds all the information necessary for a build. To accelerate the start-up, a part of the information is stored and loaded between the runs. The persistent attributes are the following:
.Persistent attributes
[options="header", cols="1,3,3"]
|=================
|Attribute | Description | Type
|root | Node representing the root of the file system | Node
|node_deps | Implicit dependencies | dict mapping Node to signatures
|raw_deps | Implicit file dependencies which could not be resolved | dict mapping Node ids to any serializable type
|task_sigs | Signature of the tasks executed | dict mapping a Task computed uid to a hash
|=================
=== Support for c-like languages
==== Compiled tasks and link tasks
The tool _waflib.Tools.ccroot_ provides a system for creating object files and linking them into a single final file. The method _waflib.Tools.ccroot.apply_link_ is called after the method _waflib.TaskGen.process_source_ to create the link task. In pseudocode:
[source,shishell]
---------------
call the method process_source:
for each source file foo.ext:
process the file by extension
if the method create_compiled_task is used:
create a new task
set the output file name to be foo.ext.o
add the task to the list self.compiled_tasks
call the method apply_link
for each name N in self.features:
find a class named N:
if the class N derives from 'waflib.Tools.ccroot.link_task':
create a task of that class, assign it to self.link_task
set the link_task inputs from self.compiled_tasks
set the link_task output name to be env.N_PATTERN % self.target
stop
---------------
This system is used for _assembly_, _C_, _C++_, _D_ and _fortran_ by default. Note that the method _apply_link_ is supposed to be called after the method _process_source_.
We will now demonstrate how to support the following mini language:
NOTE: Task generator instances have at most one link task instance
=== Writing re-usable Waf tools
==== Adding a waf tool
===== Importing the code
The intent of the Waf tools is to promote high cohesion by moving all conceptually related methods and classes into separate files, hidden from the Waf core, and as independent from each other as possible.
Custom Waf tools can be left in the projects, added to a custom waf file through the 'waflib/extras' folder, or used through 'sys.path' changes.
The tools can import other tools directly through the 'import' keyword. The scripts however should always load the tools through 'ctx.load' to limit the coupling. Compare for example:
[source,python]
---------------
def configure(ctx):
    from waflib.extras.foo import method1
    method1(ctx)
---------------
and:
[source,python]
---------------
def configure(ctx):
    ctx.load('foo')
    ctx.method1()
---------------
The second version should be preferred, as it makes fewer assumptions on whether 'method1' comes from the module 'foo' or not, and on where the module 'foo' is located.
===== Naming convention for C/C++/Fortran
The tools 'compiler_c', 'compiler_cxx' and 'compiler_fc' use other waf tools to detect the presence of particular compilers. They provide a naming convention that gives new tools a chance to register themselves automatically and save the import in user scripts. Tools having names beginning with 'c_', 'cxx_' and 'fc_' will be tested.
The registration code will be similar to the following:
[source,python]
---------------
from waflib.Tools.compiler_X import X_compiler
X_compiler['platform'].append('module_name')
---------------
where *X* represents the type of compiler ('c', 'cxx' or 'fc'), *platform* is the platform on which the detection should take place (linux, win32, etc), and *module_name* is the name of the tool to use.
==== Command methods
===== Subclassing is only for commands
As a general rule, subclasses of 'waflib.Context.Context' are created only when a new user command is necessary. This is the case for example when a command for a specific variant (output folder) is required, or to provide a new behaviour. When this happens, the class methods 'recurse', 'execute' or the class attributes 'cmd', 'fun' are usually overridden.
NOTE: If there is no new command needed, do not use subclassing.
===== Domain-specific methods are convenient for the end users
Although the Waf framework promotes the most flexible way of declaring tasks through task generators, it is often more convenient to declare domain-specific wrappers in large projects. For example, the samba project provides a function used as:
[source,python]
---------------
bld.SAMBA_SUBSYSTEM('NDR_NBT_BUF',
    source = 'nbtname.c',
    deps = 'talloc',
    autoproto = 'nbtname.h'
    )
---------------
===== How to bind new methods
New methods are commonly bound to the build context or to the configuration context by using the '@conf' decorator:
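For example (a minimal sketch):

[source,python]
---------------
from waflib.Configure import conf

@conf
def check_some_feature(self): <1>
    self.start_msg('Checking for some feature')
    self.end_msg('ok')

def configure(ctx):
    ctx.check_some_feature() <2>
---------------

<1> The decorated function is bound to the configuration and build context classes
<2> The new method is then called like any other context method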
We will now provide a detailed description of the build phase, which is used for processing the build targets.
=== Essential build concepts
==== Build order and dependencies
To illustrate the various concepts that are part of the build process, we are now going to use a new example.
The files +foo.txt+ and +bar.txt+ will be created by copying the file +wscript+, and the file +foobar.txt+ will be created from the concatenation of the generated files. Here is a summary: footnote:[It is actually considered a best practice to avoid copying files. When this is required, consider installing files or re-using the examples provided under the folder `demos/subst` of the source distribution.]
[source,shishell]
---------------
cp: wscript -> foo.txt
cp: wscript -> bar.txt
cat: foo.txt, bar.txt -> foobar.txt
--------------
Each of the three lines represents a command to execute. While the _cp_ commands may be executed in any order or even in parallel, the _cat_ command may only be executed after all the others are done. The constraints on *the build order* are represented on the following http://en.wikipedia.org/wiki/Directed_acyclic_graph[Directed acyclic graph]:
image::dag_tasks{PIC}["Task representation of the same build"{backend@docbook:,width=260:},align="center"]
When the +wscript+ input file changes, the +foo.txt+ output file has to be created once again. The file +foo.txt+ is said to depend on the +wscript+ file. The *file dependencies* can be represented by a directed acyclic graph too:
image::dag_nodes{PIC}["File dependencies on a simple build"{backend@docbook:,width=120:},align="center"]
Building a project consists of executing the commands according to a schedule that respects these constraints. Faster builds are obtained when commands are executed in parallel (by using the build order), and when commands can be skipped (by using the dependencies).
In Waf, the commands are represented by *task objects*. The dependencies are used by the task classes, and may be file-based or abstract to enforce particular constraints.
==== Direct task declaration
We will now represent the build from the previous section by declaring the tasks directly in the build section:
<1> The tasks are not executed in the _clean_ command
<2> The build keeps track of the files that were generated to avoid generating them again
<3> Modify one of the source files
<4> Rebuild according to the dependency graph
Please remember:
. The execution order was *computed automatically*, by using the file inputs and outputs set on the task instances
. The dependencies were *computed automatically* (the files were rebuilt when necessary), by using the node objects (hashes of the file contents were stored between the builds and then compared)
. The tasks that have no order constraints are executed in parallel by default
==== Task encapsulation by task generators
Declaring the tasks directly is tedious and results in lengthy scripts. Feature-wise, the following is equivalent to the previous example:
The *ctx(...)* call is a shortcut to the class _waflib.TaskGen.task_gen_; instances of this class are called *task generator objects*. The task generators are lazy containers and will only create the tasks and the task classes when they are actually needed:
<2> The tasks created are stored in the list _tasks_ (0..n tasks may be added)
<3> Tasks are created after calling the method post() - it is usually called automatically internally
<4> A new task class was created dynamically for the target +foo+
==== Overview of the build phase
A high level overview of the build process is represented on the following diagram:
image::build_overview{PIC}["Overview of the build phase"{backend@docbook:,width=450:},align="center"]
NOTE: The tasks are all created before any of them is executed. New tasks may be created after the build has started, but the dependencies have to be set by using low-level apis.
=== More build options
Although any operation can be executed as part of a task, a few scenarios are typical and it makes sense to provide convenience functions for them.
==== Executing specific routines before or after the build
User functions may be bound to be executed at two key moments during the build command (callbacks):
. immediately before the build starts (bld.add_pre_fun)
. immediately after the build is completed successfully (bld.add_post_fun)
Here is how to execute a test after the build is finished:
<1> Install various files in the target destination
<2> Install one file, changing its name
<3> Create a symbolic link
<4> Overriding the configuration set ('env' is optional in the three methods install_files, install_as and symlink_as)
<5> Install src/bar/foo/a1.h as seen from the current script into '$\{PREFIX}/share/foo/a1.h'
<6> Install the png files recursively, preserving the folder structure read from src/bar/
NOTE: the methods _install_files_, _install_as_ and _symlink_as_ will do something only during _waf install_ or _waf uninstall_, they have no effect in other build commands
==== Listing the task generators and forcing specific task generators
The _list_ command is used to display the task generators that are declared:
In this case the +.so+ files were also rebuilt. Since the files attribute is interpreted as a comma-separated list of regular expressions, the following will produce a different output:
Transformations may be performed automatically based on the file name or on the extension.
==== Refactoring repeated rule-based task generators into implicit rules
The explicit rules described in the previous chapter become a limitation for processing several files of the same extension. The following code may lead to unmaintainable scripts and to slow builds (large amount of objects):
It is desirable to extract the rule from the user scripts in the following manner:
[source,python]
---------------
def build(bld):
    bld(source='a.lua b.lua c.lua')
---------------
The following piece of code will enable this functionality. It may be inserted in a waf tool or in the same `wscript` file:
[source,python]
---------------
from waflib import TaskGen
TaskGen.declare_chain(
    name         = 'luac', <1>
    rule         = '${LUAC} -s -o ${TGT} ${SRC}', <2>
    shell        = False,
    ext_in       = '.lua', <3>
    ext_out      = '.luac', <4>
    reentrant    = False, <5>
    install_path = '${LUADIR}', <6>
)
---------------
<1> The name for the corresponding task class to use
<2> The rule is the same as for any rule-based task generator. It is passed to the `run_str` attribute of a task class.
<3> Input file, processed by extension
<4> Output files extensions separated by spaces. In this case there is only one output file
<5> The reentrant attribute is used to add the output files as source again, for processing by another implicit rule
<6> String representing the installation path for the output files, similar to the destination path from 'bld.install_files'. To disable installation, set it to `False` or `None`.
==== Chaining more than one command
Now consider the long chain 'uh.in' → 'uh.a' → 'uh.b' → 'uh.c'. The following implicit rules demonstrate how to generate the files while maintaining a minimal user script:
Because transformation chains rely on implicit transformations, it may be desirable to hide some files from the list of sources. Or, some dependencies may be produced conditionally and may not be known in advance. A 'scanner method' is a kind of callback used to find additional dependencies just before the target is generated. For illustration purposes, let us start with an empty project containing three files: the 'wscript', 'ch.in' and 'ch.dep'
[source,shishell]
---------------
$ cd /tmp/smallproject
$ tree
.
|-- ch.dep
|-- ch.in
`-- wscript
---------------
The build will create a copy of 'ch.in' called 'ch.out'. Also, 'ch.out' must be rebuilt whenever 'ch.dep' changes. This corresponds more or less to the following Makefile:
[source,make]
-----------------
ch.out: ch.in ch.dep
cp ch.in ch.out
-----------------
The user script should only contain the following code:
[source,python]
---------------
top = '.'
out = 'build'
def configure(conf):
    pass

def build(bld):
    bld(source = 'ch.in')
---------------
The code below is independent from the user scripts and may be located in a Waf tool.
[source,python]
---------------
def scan_meth(task): <1>
    node = task.inputs[0]
    dep = node.parent.find_resource(node.name.replace('.in', '.dep')) <2>
    if not dep:
        raise ValueError("Could not find the .dep file for %r" % node)
    return ([dep], []) <3>

from waflib import TaskGen
TaskGen.declare_chain(
    name      = 'copy',
    rule      = 'cp ${SRC} ${TGT}',
    ext_in    = '.in',
    ext_out   = '.out',
    reentrant = False,
    scan      = scan_meth, <4>
)
---------------
<1> The scanner method accepts a task object as input (not a task generator)
<2> Use node methods to locate the dependency (and raise an error if it cannot be found)
<3> Scanner methods return a tuple containing two lists. The first list contains the list of node objects to depend on. The second list contains private data such as debugging information. The results are cached between build calls so the contents must be serializable.
<4> Add the scanner method to chain declaration
The execution trace will be the following:
[source,shishell]
--------------
$ echo 1 > ch.in
$ echo 1 > ch.dep <1>
$ waf distclean configure build
'distclean' finished successfully (0.001s)
'configure' finished successfully (0.001s)
Waf: Entering directory `/tmp/smallproject/build'
[1/1] copy: ch.in -> build/ch.out <2>
Waf: Leaving directory `/tmp/smallproject/build'
'build' finished successfully (0.010s)
$ waf
Waf: Entering directory `/tmp/smallproject/build'
Waf: Leaving directory `/tmp/smallproject/build'
'build' finished successfully (0.005s) <3>
$ echo 2 > ch.dep <4>
$ waf
Waf: Entering directory `/tmp/smallproject/build'
[1/1] copy: ch.in -> build/ch.out <5>
Waf: Leaving directory `/tmp/smallproject/build'
'build' finished successfully (0.012s)
--------------
<1> Initialize the file contents of 'ch.in' and 'ch.dep'
<2> Execute a first clean build. The file 'ch.out' is produced
<3> The target 'ch.out' is up-to-date because nothing has changed
<4> Change the contents of 'ch.dep'
<5> The dependency has changed, so the target is rebuilt
Here are a few important points about scanner methods:
. they are executed only when the target is not up-to-date.
. they may not modify the 'task' object or the contents of the configuration set 'task.env'
. they are executed in a single main thread to avoid concurrency issues
. the results of the scanner (tuple of two lists) are re-used between build executions (and it is possible to access those results programmatically)
. the make-like rules also accept a 'scan' argument (scanner methods are bound to the task rather than the task generators)
. they are used by Waf internally for c/c++ support, to add dependencies dynamically on the header files ('.c' → '.h')
==== Extension callbacks
In the chain declaration from the previous sections, the attribute 'reentrant' was described as controlling whether the generated files are processed again or not. There are cases however where one of the two generated files must be declared (because it will be used as a dependency) but where it cannot be considered as a source file in itself (like a header in c/c\++). Consider the two chains ('uh.in' → 'uh.a1' + 'uh.a2') and ('uh.a1' → 'uh.b') in the following example:
[source,python]
---------------
top = '.'
out = 'build'
def configure(conf):
    pass

def build(bld):
    obj = bld(source='uh.in')

from waflib import TaskGen
TaskGen.declare_chain(
    name      = 'a',
    rule      = 'cp ${SRC} ${TGT}',
    ext_in    = '.in',
    ext_out   = ['.a1', '.a2'],
    reentrant = True,
)
TaskGen.declare_chain(
    name      = 'b',
    rule      = 'cp ${SRC} ${TGT}',
    ext_in    = '.a1',
    ext_out   = '.b',
    reentrant = False,
)
---------------
The following error message will be produced:
[source,shishell]
--------------
$ waf distclean configure build
'distclean' finished successfully (0.001s)
'configure' finished successfully (0.001s)
Waf: Entering directory `/tmp/smallproject'
Waf: Leaving directory `/tmp/smallproject'
Cannot guess how to process bld:///tmp/smallproject/uh.a2 (got mappings ['.a1', '.in'] in
class TaskGen.task_gen) -> try conf.load(..)?
--------------
The error message indicates that there is no way to process 'uh.a2'. Only files of extension '.a1' or '.in' can be processed. Internally, extension names are bound to callback methods. The error is raised because no such method could be found, and here is how to register an extension callback globally:
[source,python]
---------------
from waflib import TaskGen

@TaskGen.extension('.a2')
def foo(*k, **kw):
    pass
---------------
To register an extension callback locally, a reference to the task generator object must be kept:
[source,python]
---------------
def build(bld):
    obj = bld(source='uh.in')

    def callback(*k, **kw):
        pass

    obj.mappings['.a2'] = callback
---------------
The exact method signature and typical usage for the extension callbacks are the following:
[source,python]
---------------
from waflib import TaskGen
@TaskGen.extension(".a", ".b") <1>
def my_callback(task_gen_object<2>, node<3>):
    task_gen_object.create_task(
        task_name,    <4>
        node,         <5>
        output_nodes) <6>
---------------
<1> Comma-separated list of extensions (strings)
<2> Task generator instance holding the data
<3> Instance of Node, representing a file (either source or build)
<4> The first argument to create a task is the name of the task class
<5> The second argument is the input node (or a list of nodes for several inputs)
<6> The last parameter is the output node (or a list of nodes for several outputs)
The creation of new task classes will be described in the next section.
==== Task class declaration
Waf tasks are instances of the class Task.TaskBase. Yet, the base class contains only the bare minimum, and the immediate subclass 'Task.Task' is usually chosen in user scripts. We will now start over with a simple project containing only one project 'wscript' file and an example file named 'uh.in'. A task class will be added.
[source,python]
---------------
top = '.'
out = 'build'
def configure(conf):
    pass

def build(bld):
    bld(source='uh.in')

from waflib import Task, TaskGen

@TaskGen.extension('.in')
def process(self, node):
    tsk = self.create_task('abcd') <1>
    print(tsk.__class__)

class abcd(Task.Task): <2>
    def run(self): <3>
        print('executing...')
        return 0 <4>
---------------
<1> Create a new instance of 'abcd'. The method 'create_task' is a shortcut to make certain the task will keep a reference on its task generator.
<2> Inherit the class Task located in the module Task.py
<3> The method run is called when the task is executed
<4> The task return status must be an integer, which is zero to indicate success. The tasks that have failed will be executed on subsequent builds
The output of the build execution will be similar to the following (a sketch; paths and timings are illustrative):
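[source,shishell]
---------------
$ waf distclean configure build
'distclean' finished successfully (0.002s)
'configure' finished successfully (0.005s)
Waf: Entering directory `/tmp/simpleproject/build'
<class 'wscript_main.abcd'>
[1/1] abcd:
executing...
Waf: Leaving directory `/tmp/simpleproject/build'
'build' finished successfully (0.010s)
---------------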
Although it is possible to write task classes in plain Python, two factory functions are provided to simplify the work, for example:
[source,python]
---------------
from waflib import Task

Task.simple_task_type( <1>
    'xsubpp', <2>
    rule   = '${PERL} ${XSUBPP} ${SRC} > ${TGT}', <3>
    color  = 'BLUE', <4>
    before = 'cc') <5>

def build_it(task):
    return 0

Task.task_type_from_func( <6>
    'sometask', <7>
    func    = build_it, <8>
    vars    = ['SRT'],
    color   = 'RED',
    ext_in  = '.in',
    ext_out = '.out') <9>
---------------
<1> Create a new task class executing a rule string
<2> Task class name
<3> Rule to execute during the build
<4> Color for the output during the execution
<5> Execute the task instance before any instance of task classes named 'cc'. The opposite of 'before' is 'after'
<6> Create a new task class from a custom python function. The 'vars' attribute represents additional configuration set values to use as dependencies
<7> Task class name
<8> Function to use
<9> In this context, the extension names are meant to be used for computing the execution order with other tasks, without naming the other task classes explicitly
Note that most attributes are common between the two function factories. More usage examples may be found in most Waf tools.
==== Source attribute processing
The first step in processing the source file attribute is to convert all file names into Nodes. Special methods may be mapped to intercept names by the exact file name entry (no extension). The Node objects are then added to the task generator attribute 'source'.
The list of nodes is then consumed by regular extension mappings. Extension methods may re-inject the output nodes for further processing by appending them to the attribute 'source' (hence the name re-entrant provided in declare_chain).
Due to the amount of features provided by Waf, this book cannot be both complete and up-to-date. For greater understanding and practice the following links are recommended to the reader:
.Recommended links
[options="header"]
|================
|Link|Description
|https://waf.io/apidocs/index.html|The apidocs
|https://waf.io|The Waf project page and downloads
|================
The _configuration_ command is used to check if the requirements for working on a project are met and to store the information. The parameters are then stored for use by other commands, such as the build command.
=== Using persistent data
==== Sharing data with the build
The configuration context is used to store data which may be re-used during the build. Let's begin with the following example (a sketch reconstructed from the callouts; the '--foo' option and the 'foo.txt' target match the cache contents shown below):
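[source,python]
---------------
top = '.'
out = 'build'

def options(ctx):
    ctx.add_option('--foo', action='store', default=False, help='Silly test')

def configure(ctx):
    ctx.env.FOO = ctx.options.foo <1>
    ctx.find_program('touch', var='TOUCH') <2>

def build(ctx):
    print(ctx.env.FOO) <3>
    ctx(rule='${TOUCH} ${TGT}', target='foo.txt') <4>
---------------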
<1> Store the option _foo_ into the variable _env_ (dict-like structure)
<2> Configuration routine used to find the program _touch_ and to store it into _ctx.env.TOUCH_ footnote:['find_program' may use the same variable from the OS environment during the search, for example 'CC=gcc waf configure']
<3> Print the value of _ctx.env.FOO_ that was set during the configuration
<4> The variable _$\{TOUCH}_ corresponds to the variable _ctx.env.TOUCH_.
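The execution would produce output along these lines (a sketch; paths and timings are illustrative):
[source,shishell]
---------------
$ waf distclean configure build --foo=abcd -v
'distclean' finished successfully (0.003s)
Checking for program touch : /usr/bin/touch <1> <2>
'configure' finished successfully (0.021s)
Waf: Entering directory `/tmp/project/build'
abcd
[1/1] foo.txt:  -> build/foo.txt
12:00:00 runner ['/usr/bin/touch', 'foo.txt'] <3>
Waf: Leaving directory `/tmp/project/build'
'build' finished successfully (0.064s)
---------------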
<1> Output of the configuration test _find_program_
<2> The value of _TOUCH_
<3> Command-line used to create the target 'foo.txt'
The variable _ctx.env_ is called a *Configuration set*, and is an instance of the class 'ConfigSet'. The class is a wrapper around Python dicts to handle serialization. For this reason it should be used for simple variables only (no functions or classes). The values are stored in a python-like format in the build directory:
[source,shishell]
---------------
$ tree
build/
|-- foo.txt
|-- c4che
| |-- build.config.py
| `-- _cache.py
`-- config.log
$ cat build/c4che/_cache.py
FOO = 'abcd'
PREFIX = '/usr/local'
TOUCH = '/usr/bin/touch'
---------------
NOTE: Reading and writing values to _ctx.env_ is possible in both configuration and build commands. Yet, the values are stored to a file only during the configuration phase.
==== Configuration set usage
We will now provide more examples of configuration set usage. The object *ctx.env* provides convenience methods to access its contents, as in the following sketch (reconstructed to match the callouts and the output below):
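[source,python]
---------------
top = '.'
out = 'build'

def configure(ctx):
    ctx.env['CFLAGS'] = ['-g'] <1>
    ctx.env.CFLAGS = ['-g'] <2>
    ctx.env.append_value('CXXFLAGS', ['-O2', '-g']) <3>
    ctx.env.append_unique('CFLAGS', ['-g', '-O2'])
    ctx.env.prepend_value('CFLAGS', ['-O3']) <4>

    print(type(ctx.env))
    print(ctx.env)
    print(ctx.env.FOO)
---------------
<1> Key-based access to the configuration set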
<2> Attribute-based access (the two forms are equivalent)
<3> Append each element to the list _ctx.env.CXXFLAGS_, assuming it is a list
<4> Insert the values at the beginning. Note that there is no such method as _prepend_unique_
The execution will produce the following output:
[source,shishell]
---------------
$ waf configure
<class 'waflib.ConfigSet.ConfigSet'> <1>
'CFLAGS' ['-O3', '-g', '-O2'] <2>
'CXXFLAGS' ['-O2', '-g']
'PREFIX' '/usr/local'
[] <3>
$ cat build/c4che/_cache.py <4>
CFLAGS = ['-O3', '-g', '-O2']
CXXFLAGS = ['-O2', '-g']
PREFIX = '/usr/local'
---------------
<1> The object _conf.env_ is an instance of the class ConfigSet defined in _waflib/ConfigSet.py_
<2> The contents of _conf.env_ after the modifications
<3> When a key is undefined, it is assumed that it is a list (used by *append_value* above)
<4> The object _conf.env_ is stored by default in this file
Copy and serialization APIs are also provided:
// configuration_copysets
[source,python]
---------------
top = '.'
out = 'build'
def configure(ctx):
    ctx.env.FOO = 'TEST'

    env_copy = ctx.env.derive() <1>

    node = ctx.path.make_node('test.txt') <2>
    env_copy.store(node.abspath()) <3>

    from waflib.ConfigSet import ConfigSet
    env2 = ConfigSet() <4>
    env2.load(node.abspath()) <5>

    print(node.read()) <6>
---------------
<1> Make a copy of _ctx.env_ - this is a shallow copy
<2> Use *ctx.path* to create a node object representing the file +test.txt+
<3> Store the contents of *env_copy* into +test.txt+
<4> Create a new empty ConfigSet object
<5> Load the values from +test.txt+
<6> Print the contents of +test.txt+
Upon execution, the output will be the following:
[source,shishell]
---------------
$ waf distclean configure
'distclean' finished successfully (0.005s)
FOO = 'TEST'
PREFIX = '/usr/local'
'configure' finished successfully (0.006s)
---------------
// ===== multiple configuration sets?
=== Configuration utilities
==== Configuration methods
The method _ctx.find_program_ seen previously is an example of a configuration method. Here are more examples:
// configuration_methods
[source,python]
---------------
top = '.'
out = 'build'
def configure(ctx):
    ctx.find_program('touch', var='TOUCH')
    ctx.check_waf_version(mini='{version}')
    ctx.find_file('fstab', ['/opt', '/etc'])
---------------
Although these methods are provided by the context class _waflib.Configure.ConfigurationContext_, they do not appear on that class in the https://waf.io/apidocs/index.html[API documentation]. For modularity reasons, they are defined as simple functions and then bound dynamically:
[source,python]
---------------
top = '.'
out = 'build'
from waflib.Configure import conf <1>

@conf <2>
def hi(ctx):
    print('→ hello, world!')
# hi = conf(hi) <3>

def configure(ctx):
    ctx.hi() <4>
---------------
<1> Import the decorator *conf*
<2> Use the decorator to bind the method _hi_ to the configuration context and build context classes. In practice, the configuration methods are only used during the configuration phase.
<3> Decorators are simple python functions. Python 2.3 does not support the *@* syntax, so the function has to be called after the function declaration
<4> Use the method previously bound to the configuration context class
The execution will produce the following output:
[source,shishell]
---------------
$ waf configure
→ hello, world!
'configure' finished successfully (0.005s)
---------------
==== Loading and using Waf tools
For efficiency reasons, only a few configuration methods are present in the Waf core. Most configuration methods are loaded by extensions called *Waf tools*.
The main tools are located in the folder +waflib/Tools+, and the tools in testing phase are located under the folder +waflib/extras+.
Yet, Waf tools may be used from any location on the filesystem.
We will now demonstrate a very simple Waf tool named +dang.py+, which will be used to set 'ctx.env.DANG' from a command-line option. The listings below are sketches matching the callouts (the option name is illustrative):
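[source,python]
---------------
#! /usr/bin/env python
# dang.py - the tool itself

def options(opt):
    opt.add_option('--dang', action='store', default='', dest='dang')

def configure(ctx):
    ctx.env.DANG = ctx.options.dang
---------------
And a project 'wscript' using it:
[source,python]
---------------
top = '.'
out = 'build'

def options(ctx):
    ctx.load('dang', tooldir='.')

def configure(ctx):
    ctx.load('dang', tooldir='.') <1>

def build(ctx):
    print(ctx.env.DANG) <2>
---------------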
<1> First the tool is imported as a python module, and then the method _configure_ is called by _load_
<2> The tools loaded during the configuration will be loaded during the build phase
==== Multiple configurations
The 'conf.env' object is a central element of the configuration, which is accessed and modified by Waf tools and by user-provided configuration functions. The Waf tools do not enforce a particular structure for the build scripts, so the tools will only modify the contents of the default object. The user scripts may provide several 'env' objects in the configuration and pre-set or post-set specific values:
[source,python]
---------------
def configure(ctx):
    env = ctx.env <1>

    ctx.setenv('debug') <2>
    ctx.env.CC = 'gcc' <3>
    ctx.load('gcc')

    ctx.setenv('release', env) <4>
    ctx.load('msvc')
    ctx.env.CFLAGS = ['/O2']

    print(ctx.all_envs['debug']) <5>
---------------
<1> Save a reference to 'conf.env'
<2> Copy and replace 'conf.env'
<3> Modify 'conf.env'
<4> Copy and replace 'conf.env' again, from the initial data
<5> Recall a configuration set by its name
=== Exception handling
==== Launching and catching configuration exceptions
Configuration helpers are methods provided by the conf object to help find parameters, for example the method 'conf.find_program':
[source,python]
---------------
top = '.'
out = 'build'
def configure(ctx):
    ctx.find_program('some_app')
---------------
When a test cannot complete properly, an exception of the type 'waflib.Errors.ConfigurationError' is raised. This often occurs when something is missing in the operating system environment or because a particular condition is not satisfied. For example:
[source,shishell]
---------------
$ waf
Checking for program some_app : not found
error: The program some_app could not be found
---------------
These exceptions may be raised manually by using 'conf.fatal':
[source,python]
---------------
top = '.'
out = 'build'
def configure(ctx):
    ctx.fatal("I'm sorry Dave, I'm afraid I can't do that")
---------------
Which will display the same kind of error:
[source,shishell]
---------------
$ waf configure
error: I'm sorry Dave, I'm afraid I can't do that
$ echo $?
1
---------------
Here is how to catch configuration exceptions:
// configuration_exception
[source,python]
---------------
top = '.'
out = 'build'
def configure(ctx):
    try:
        ctx.find_program('some_app')
    except ctx.errors.ConfigurationError: <1>
        ctx.to_log('some_app was not found (ignoring)') <2>
---------------
<1> For convenience, the module _waflib.Errors_ is bound to _ctx.errors_
<2> Adding information to the log file
The execution output will be the following:
[source,shishell]
---------------
$ waf configure
Checking for program some_app : not found
'configure' finished successfully (0.029s) <1>
$ cat build/config.log <2>
# project configured on Tue Jul 13 19:15:04 2010 by
# waf {version} (abi 98, python 20605f0 on linux2)
from /tmp/configuration_exception: The program ['some_app'] could not be found
some_app was not found (ignoring) <3>
---------------
<1> The configuration completes without errors
<2> The log file contains useful information about the configuration execution
<3> Our log entry
Catching the errors by hand can be inconvenient. For this reason, all *@conf* methods accept a parameter named 'mandatory' to suppress configuration errors. The code snippet is therefore equivalent to:
[source,python]
---------------
top = '.'
out = 'build'
def configure(ctx):
    ctx.find_program('some_app', mandatory=False)
---------------
As a general rule, clients should never rely on exit codes or returned values and must catch configuration exceptions. The tools should always raise configuration errors to display the errors and to give a chance to the clients to process the exceptions.
==== Transactions
Waf tools called during the configuration may use and modify the contents of 'conf.env' at will. Those changes may be complex to track and to undo. Fortunately, the configuration exceptions make it possible to simplify the logic and to go back to a previous state easily. The following example illustrates how to use a transaction to try several tools in turn:
[source,python]
---------------
top = '.'
out = 'build'
def configure(ctx):
    for compiler in ('gcc', 'msvc'):
        try:
            ctx.env.stash()
            ctx.load(compiler)
        except ctx.errors.ConfigurationError:
            ctx.env.revert()
        else:
            break
    else:
        ctx.fatal('Could not find a compiler')
---------------
Though several calls to 'stash' can be made, the copies made are shallow, which means that modifications to nested objects (such as lists) will be permanent. For this reason, the following is a configuration anti-pattern:
[source,python]
---------------
def configure(ctx):
    ctx.env.CFLAGS.append('-O2')
---------------
The methods should always be used instead:
[source,python]
---------------
def configure(ctx):
    ctx.env.append_value('CFLAGS', '-O2')
---------------
To conclude this chapter on the configuration, let us emphasize the roles of the configuration context and of the configuration set objects. The configuration context is meant as a container for non-persistent data such as methods, functions, code and utilities; such data may be attached to the context object directly in order to share it with scripts and tools.
In practice, values are frequently needed in the build section too. Adding the data to 'conf.env' is therefore a logical way of separating the concerns between the code (configuration methods) and the persistent data.
Although Waf is language neutral, it is used very often for C and C++ projects. This chapter describes the Waf tools and functions used for these languages.
=== Common script for C, C++ and D applications
==== Predefined task generators
The C/C++ builds consist in transforming (compiling) source files into object files and assembling (linking) the object files at the end. In theory a single programming language should be sufficient for writing any application, but the situation is usually more complicated:
. Source files may be created by other compilers in other languages (IDL, ASN1, etc)
. Additional files may enter in the link step (libraries, object files) and applications may be divided in dynamic or static libraries
. Different platforms may require different processing rules (manifest files on MS-Windows, etc)
To conceal the implementation details and the portability concerns, each target (program, library) can be wrapped as a single task generator object, as in the following sketch (reconstructed from the callouts; the source file names are chosen to match the build directory listing below):
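[source,python]
---------------
def options(opt):
    opt.load('compiler_c')

def configure(conf):
    conf.load('compiler_c') <1>

def build(bld):
    bld.program(source='main.c', target='app', use='myshlib mystlib') <2>
    bld.stlib(source='a.c', target='mystlib') <3>
    bld.shlib(source='b.c', target='myshlib', use='myobjects') <4>
    bld.objects(source='c.c', target='myobjects')
---------------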
<1> Use compiler_c to load the c routines and to find a compiler (for c++ use 'compiler_cxx' and 'compiler_d' for d)
<2> Declare a program built from _main.c_ and using two other libraries
<3> Declare a static library
<4> Declare a shared library, using the objects from 'myobjects'
The targets will have different extensions and names depending on the platform. For example on Linux, the contents of the build directory will be:
[source,shishell]
---------------
$ tree build
build/
|-- c4che
| |-- build.config.py
| `-- _cache.py
|-- a.c.1.o
|-- app <1>
|-- b.c.2.o
|-- c.c.3.o
|-- config.log
|-- libmyshlib.so <2>
|-- libmystlib.a
`-- main.c.0.o <3>
---------------
<1> Programs have no extension on Linux but will have '.exe' on Windows
<2> The '.so' extension for shared libraries on Linux will be '.dll' on Windows
<3> The '.o' object files use the original file name and an index to avoid errors in multiple compilations
The build context methods _program_, _shlib_, _stlib_ and _objects_ return a single task generator with the appropriate features detected from the source list. For example, for a program having _.c_ files in the source attribute, the features added will be _"c cprogram"_, for a _d_ static library, _"d dstlib"_.
==== Additional attributes
The methods described previously can process many more attributes than just 'use'. Here is an advanced example:
[source,python]
---------------
def options(opt):
    opt.load('compiler_c')

def configure(conf):
    conf.load('compiler_c')

def build(bld):
    bld.program(
        source       = 'main.c', <1>
        target       = 'appname', <2>
        features     = ['more', 'features'], <3>
        includes     = ['.'], <4>
        defines      = ['LINUX=1', 'BIDULE'],
        lib          = ['m'], <5>
        libpath      = ['/usr/lib'],
        stlib        = ['dl'], <6>
        stlibpath    = ['/usr/local/lib'],
        linkflags    = ['-g'], <7>
        rpath        = ['/opt/kde/lib'], <8>
        vnum         = '1.2.3',
        install_path = '${SOME_PATH}/bin', <9>
        cflags       = ['-O2', '-Wall'], <10>
        cxxflags     = ['-O3'],
        dflags       = ['-g'],
    )
---------------
<1> Source file list
<2> Target, converted automatically to +target.exe+ or +libtarget.so+, depending on the platform and type
<3> Additional features to add (for a program consisting in c files, the default will be _'c cprogram'_)
<4> Includes and defines
<5> Shared libraries and shared libraries link paths
<6> Static libraries and link paths
<7> Use linkflags for specific link flags (not for passing libraries)
<8> rpath and vnum, ignored on platforms that do not support them
<9> Programs and shared libraries are installed by default. To disable the installation, set 'install_path' to None.
<10> Miscellaneous flags, applied to the source files that support them (if present)
=== Include processing
==== Execution path and flags
Include paths are used by the C/C++ compilers for finding headers. When one header changes, the files are recompiled automatically. For example on a project having the following structure:
[source,shishell]
---------------
$ tree
.
|-- foo.h
|-- src
| |-- main.c
| `-- wscript
`-- wscript
---------------
The file 'src/wscript' will contain the following code:
[source,python]
---------------
def build(bld):
    bld.program(
        source   = 'main.c',
        target   = 'myapp',
        includes = '.. .')
---------------
The command-line (output by `waf -v`) will have the following form:
[source,shishell]
---------------
cc -I. -I.. -Isrc -I../src ../src/main.c -c -o src/main_1.o
---------------
Because commands are executed from the build directory, the folders have been converted to include flags in the following way:
[source,shishell]
---------------
.. -> -I.. -I.
. -> -I../src -Isrc
---------------
Here are the important points to remember:
. The includes are always given relative to the directory containing the wscript file
. The includes add both the source directory and the corresponding build directory for the task generator variant
. Commands are executed from the build directory, so the include paths must be converted
. System include paths should be defined during the configuration and added to INCLUDES variables (uselib)
==== The Waf preprocessor
Waf uses a preprocessor written in Python for adding the dependencies on the headers. A simple parser looking at #include statements would miss constructs such as:
[source,c]
---------------
#define mymacro "foo.h"
#include mymacro
---------------
Using the compiler for finding the dependencies would not work for applications requiring file preprocessing such as Qt. For Qt, special include files having the '.moc' extension must be detected by the build system and produced ahead of time. The c compiler could not parse such files.
[source,c]
---------------
#include "foo.moc"
---------------
Since system headers are not tracked by default, the waf preprocessor may miss dependencies written in the following form:
[source,c]
---------------
#if SOMEMACRO
/* an include in the project */
#include "foo.h"
#endif
---------------
To write portable code and to ease debugging, it is strongly recommended to put all the conditions used within a project into a 'config.h' file.
[source,python]
---------------
def configure(conf):
    conf.check(
        fragment    = 'int main() { return 0; }\n',
        define_name = 'FOO',
        mandatory   = True)
    conf.write_config_header('config.h')
---------------
For performance reasons, the implicit dependency on the system headers is ignored by default. The following code may be used to enable this behaviour:
[source,python]
---------------
from waflib import c_preproc
c_preproc.go_absolute = True
---------------
Additional tools such as https://github.com/waf-project/waf/blob/master/waflib/extras/gccdeps.py[gccdeps] or https://github.com/waf-project/waf/blob/master/waflib/extras/dumbpreproc.py[dumbpreproc] provide alternate dependency scanners that can be faster in certain cases (boost).
NOTE: The Waf engine will detect if tasks generate headers necessary for the compilation and compute the build order accordingly. It may sometimes improve the performance of the scanner if the tasks creating headers provide the hint 'ext_out=[".h"]'.
==== Dependency debugging
The Waf preprocessor contains a specific debugging zone:
[source,shishell]
---------------
$ waf --zones=preproc
---------------
To display the dependencies obtained or missed, use the following:
[source,shishell]
---------------
$ waf --zones=deps
23:53:21 deps deps for src:///comp/waf/demos/qt4/src/window.cpp:
---------------
The dependency computation is performed only when the files are not up-to-date, so these commands will display something only when there is a file to compile.
NOTE: The scanner is only called when C files or dependencies change. In the rare case of adding headers after a successful compilation, then it may be necessary to run 'waf clean build' to force a full scanning.
=== Library interaction (use)
==== Local libraries
The attribute 'use' enables the link against libraries (static or shared), or the inclusion of object files when the task generator referenced is not a library.
// cprog_use
[source,python]
---------------
def build(bld):
    bld.stlib(
        source = 'test_staticlib.c',
        target = 'mylib',
        name   = 'stlib1') <1>

    bld.program(
        source   = 'main.c',
        target   = 'app',
        includes = '.',
        use      = ['stlib1']) <2>
---------------
<1> The name attribute must point at exactly one task generator
<2> The attribute 'use' contains the task generator names to use
In this example, the file 'app' will be re-created whenever 'mylib' changes (order and dependency). By using task generator names, the programs and libraries declarations may appear in any order and across scripts. For convenience, the name does not have to be defined, and will be pre-set from the target name:
[source,python]
---------------
def build(bld):
    bld.stlib(
        source = 'test_staticlib.c',
        target = 'mylib')

    bld.program(
        source   = 'main.c',
        target   = 'app',
        includes = '.',
        use      = ['mylib'])
---------------
The 'use' processing also exhibits a recursive behaviour. Let's illustrate it by the following example:
// cprog_propagation
[source,python]
---------------
def build(bld):
    bld.shlib(
        source = 'a.c', <1>
        target = 'lib1')

    bld.stlib(
        source = 'b.c',
        use    = 'cshlib', <2>
        target = 'lib2')

    bld.shlib(
        source = 'c.c',
        target = 'lib3',
        use    = 'lib1 lib2') <3>

    bld.program( <4>
        source = 'main.c',
        target = 'app',
        use    = 'lib3')
---------------
<1> A simple shared library
<2> The 'cshlib' flags will be propagated to both the library and the program.
<3> 'lib3' uses both a shared library and a static library
<4> A program using 'lib3'
Because of the shared library dependency 'lib1' → 'lib2', the program 'app' should link against both 'lib1' and 'lib3', but not against 'lib2'.
To sum up the two most important aspects of the 'use' attribute:
. The task generators may be created in any order and in different files, but must provide a unique name for the 'use' attribute
. The 'use' processing will iterate recursively over all the task generators involved, but the flags added depend on the target kind (shared/static libraries)
==== Special local libraries
===== Includes folders
The use keyword may point at special libraries that do not actually declare a target. For example, header-only libraries are commonly used to add specific include paths to several targets:
// cprog_incdirs
[source,python]
---------------
def build(bld):
    bld(
        includes        = '. src',
        export_includes = 'src', <1>
        name            = 'com_includes')

    bld.stlib(
        source = 'a.c',
        target = 'shlib1',
        use    = 'com_includes') <2>

    bld.program(
        source = 'main.c',
        target = 'app',
        use    = 'shlib1', <3>
    )
---------------
<1> The 'includes' attribute is private, but 'export_includes' will be used by other task generators
<2> The paths added are relative to the other task generator
<3> The 'export_includes' will be propagated to other task generators
===== Object files
Here is how to enable specific compilation flags for particular files:
// cprog_objects
[source,python]
---------------
def build(bld):
    bld.objects( <1>
        source = 'test.c',
        cflags = '-O3',
        target = 'my_objs')

    bld.shlib(
        source = 'a.c',
        cflags = '-O2', <2>
        target = 'lib1',
        use    = 'my_objs') <3>

    bld.program(
        source = 'main.c',
        target = 'test_c_program',
        use    = 'lib1') <4>
---------------
<1> Files will be compiled in c mode, but no program or library will be produced
<2> Different compilation flags may be used
<3> The objects will be added automatically in the link stage
<4> There is no object propagation to other programs or libraries to avoid duplicate symbol errors
WARNING: Like static libraries, object files are often abused to copy-paste binary code. Try to minimize the size of the executables by using shared libraries whenever possible.
===== Fake libraries
Local libraries will trigger a recompilation whenever they change. The methods 'read_shlib' and 'read_stlib' can be used to add this behaviour to external libraries or to binary files present in the project.
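A sketch of the usage (library and path names are illustrative):
[source,python]
---------------
def build(bld):
    bld.read_shlib('m', paths=['.', '/usr/lib64']) # declares a fake library named 'm'
    bld.program(source='main.c', target='app', use='m')
---------------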
The methods will try to find files such as 'libm.so' or 'libm.dll' in the specified paths to compute the required paths and dependencies. In this example, the target 'app' will be re-created whenever '/usr/lib64/libm.so' changes. These libraries are propagated between task generators just like shared or static libraries declared locally.
==== Foreign libraries and flags
When an element in the attribute 'use' does not match a local library, it is assumed that it represents a system library, and that the required flags are present in the configuration set 'env'. This system enables the addition of several compilation and link flags at once, as in the following example:
// cprog_system
[source,python]
---------------
import sys

def options(opt):
    opt.load('compiler_c')

def configure(conf):
    conf.load('compiler_c')
    conf.env.INCLUDES_TEST = ['/usr/include'] <1>

    if sys.platform != 'win32': <2>
        conf.env.DEFINES_TEST = ['TEST']

    conf.env.CFLAGS_TEST = ['-O0'] <3>
    conf.env.LIB_TEST = ['m']
    conf.env.LIBPATH_TEST = ['/usr/lib']
    conf.env.LINKFLAGS_TEST = ['-g']
    conf.env.INCLUDES_TEST = ['/opt/gnome/include']

def build(bld):
    mylib = bld.stlib(
        source = 'test_staticlib.c',
        target = 'teststaticlib',
        use    = 'TEST') <4>

    if mylib.env.CC_NAME == 'gcc':
        mylib.cxxflags = ['-O2'] <5>
---------------
<1> For portability reasons, it is recommended to use INCLUDES instead of giving flags of the form -I/include. Note that INCLUDES is used by both c and c++
<2> Variables may be left undefined in platform-specific settings, yet the build scripts will remain identical.
<3> Declare a few variables during the configuration, the variables follow the convention VAR_NAME
<4> Add all the VAR_NAME corresponding to the _use variable_ NAME, which is 'TEST' in this example
<5> 'Model to avoid': setting the flags and checking for the configuration should be performed in the configuration section
The variables used for C/C++ are the following:
.Use variables and task generator attributes for C/C++
[options="header",cols="1,1,3"]
|=================
|Uselib variable | Attribute | Usage
|LIB |lib | list of shared library names to use, without prefix or extension
|LIBPATH |libpath | list of search path for shared libraries
|STLIB |stlib | list of static library names to use, without prefix or extension
|STLIBPATH|stlibpath| list of search path for static libraries
|LINKFLAGS|linkflags| list of link flags (use other variables whenever possible)
|RPATH |rpath | list of paths to hard-code into the binary during linking time
|CFLAGS |cflags | list of compilation flags for c files
|CXXFLAGS |cxxflags | list of compilation flags for c++ files
|DFLAGS |dflags | list of compilation flags for d files
|INCLUDES |includes | include paths
|CXXDEPS | | a variable/list to trigger c++ file recompilations when it changes
|CCDEPS | | same as above, for c
|LINKDEPS | | same as above, for the link tasks
|DEFINES |defines | list of defines in the form ['key=value', ...]
|FRAMEWORK|framework| list of frameworks to use
|FRAMEWORKPATH|frameworkpath| list of framework paths to use
|ARCH |arch | list of architectures in the form ['ppc', 'x86']
|=================
The variables may be left empty for later use, and will not cause errors. During development, the configuration cache files (for example, _cache.py) may be modified from a text editor to try different configurations without forcing a whole project reconfiguration. The affected files will be rebuilt, however.
=== Configuration helpers
==== Configuration tests
The method 'check' is used to detect parameters using a small build project. The main parameters are the following:
. msg: title of the test to execute
. okmsg: message to display when the test succeeds
. errmsg: message to display when the test fails
. env: environment to use for the build (conf.env is used by default)
. compile_mode: 'cc' or 'cxx'
. define_name: add a define for the configuration header when the test succeeds (in most cases it is calculated automatically)
The errors raised are instances of 'waflib.Errors.ConfigurationError'. There are no return codes.
Besides the main parameters, the attributes from c/c++ task generators may be used. Here is a concrete example (a sketch matching the callouts and the +config.h+ shown below):
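[source,python]
---------------
def configure(conf):
    conf.check(header_name='time.h', features='c cprogram') <1>
    conf.check_cc(function_name='printf', header_name='stdio.h', mandatory=False) <2>
    conf.check_cc(fragment='int main() { return 0; }\n', define_name='boobah') <3>
    conf.check_cc(lib='m', cflags='-Wall', defines=['var=foo', 'x=y'],
        uselib_store='M') <4>
    conf.check_cc(lib='linux', use='M', mandatory=False) <5>
    conf.check_cc(fragment='#include <stdio.h>\nint main(void) { printf("4"); return 0; }\n',
        define_name='booeah', execute=True, define_ret=True) <6>
    conf.check(features='c', fragment='int main(void) { return 0; }\n') <7>
    conf.write_config_header('config.h') <8>
---------------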
<1> Try to compile a program using the header time.h; if the test is successful, the define HAVE_TIME_H will be added
<2> Try to compile a program with the function printf, adding the header stdio.h (the header_name may be a list of additional headers). All configuration tests are required by default (@conf methods) and will raise configuration exceptions. To conceal them, set the attribute 'mandatory' to False.
<3> Try to compile a piece of code, and if the test is successful, define the name boobah
<4> Modifications made to the task generator environment are not stored. When the test is successful and when the attribute uselib_store is provided, the names lib, cflags and defines will be converted into _use variables_ LIB_M, CFLAGS_M and DEFINES_M and the flag values are added to the configuration environment.
<5> Try to compile a simple c program against a library called 'linux', and reuse the previous parameters for libm by _use_
<6> Execute a simple program, collect the output, and put it in a define when successful
<7> The tests create a build with a single task generator. By passing the 'features' attribute directly it is possible to disable the compilation or to create more complicated configuration tests.
<8> After all the tests are executed, write a configuration header in the build directory (optional). The configuration header is used to limit the size of the command-line.
Here is an example of a +config.h+ produced with the previous test code:
[source,c]
---------------
/* Configuration header created by Waf - do not edit */
#ifndef _CONFIG_H_WAF
#define _CONFIG_H_WAF
#define HAVE_PRINTF 1
#define HAVE_TIME_H 1
#define boobah 1
#define booeah "4"
#endif /* _CONFIG_H_WAF */
---------------
The file +_cache.py+ will contain variables along these lines (illustrative excerpt):
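[source,python]
---------------
CFLAGS_M = ['-Wall']
DEFINES_M = ['var=foo', 'x=y']
LIB_M = ['m']
define_key = ['HAVE_TIME_H', 'HAVE_PRINTF', 'boobah', 'booeah']
---------------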
The method 'conf.check' creates a build context and a task generator internally. This means that attributes such as 'includes', 'defines' and 'cxxflags' may be used (not all are shown here). Yet, some tests may require several targets at once. To facilitate this, a custom build function can be passed directly, for example:
[source,python]
---------------
conf.test(build_fun=build_latex_test,
    msg='Checking for UCS', okmsg='ok',
    errmsg='ucs.sty is missing, install latex-extras')
---------------
==== Configuration headers
Adding lots of command-line define values increases the size of the command-line and makes it harder to review the flags when errors occur. Besides that, the defines passed on the command-line may fail unexpectedly with different compilers and command execution contexts. For example, define values containing quotes may be misinterpreted in Visual Studio response files. It is therefore a best practice to use configuration headers whenever possible.
Writing configuration headers can be performed using the following methods:
[source,python]
---------------
def configure(conf):
    conf.define('NOLIBF', 1)
    conf.undefine('NOLIBF')
    conf.define('LIBF', 1)
    conf.define('LIBF_VERSION', '1.0.2')
    conf.write_config_header('config.h')
---------------
The code snippet will produce the following 'config.h' in the build directory:
[source,shishell]
---------------
build/
|-- c4che
| |-- build.config.py
| `-- _cache.py
|-- config.log
`-- config.h
---------------
The contents of the config.h for this example are:
[source,c]
---------------
/* Configuration header created by Waf - do not edit */
#ifndef _CONFIG_H_WAF
#define _CONFIG_H_WAF
/* #undef NOLIBF */
#define LIBF 1
#define LIBF_VERSION "1.0.2"
#endif /* _CONFIG_H_WAF */
---------------
NOTE: By default, the defines are moved from the command-line into the configuration header. This means that the attribute _conf.env.DEFINES_ is cleared by this operation. To prevent this behaviour, use 'conf.write_config_header(remove=False)'
==== Pkg-config
Instead of duplicating the configuration detection in all dependent projects, configuration files may be written when libraries are installed. To ease the interaction with build systems based on Make (which cannot query databases or APIs), small applications have been created for reading the cache files and interpreting the parameters (with names traditionally ending in '-config'): http://pkg-config.freedesktop.org/wiki/[pkg-config], wx-config, sdl-config, etc.
The method 'check_cfg' is provided to ease the interaction with these applications. Here are a few examples (a sketch matching the callouts below; the package names are illustrative):
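[source,python]
---------------
def configure(conf):
    conf.load('compiler_c')
    conf.check_cfg(atleast_pkgconfig_version='0.0.0') <1>
    pango_version = conf.check_cfg(modversion='pango') <2>
    conf.check_cfg(package='pango') <3>
    conf.check_cfg(package='pango', uselib_store='MYPANGO',
        args=['--cflags', '--libs']) <4>
    conf.check_cfg(package='pango', args=['pango >= 0.1.0', 'pango < 9.9.9',
        '--cflags', '--libs']) <5>
    conf.check_cfg(package='pango', args='--cflags --libs',
        msg='Checking for pango 0.1.0', okmsg='ok', errmsg='not found') <6>
    conf.check_cfg(path='sdl-config', args='--cflags --libs',
        package='', uselib_store='SDL') <7>
    conf.check_cfg(path='mpicc', args='--showme:compile --showme:link',
        package='', uselib_store='OPEN_MPI', mandatory=False) <8>
---------------
<1> Check that pkg-config itself is present, requiring a minimum version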
<2> Retrieve the module version for a package as a string. If there were no errors, 'PANGO_VERSION' is defined. It can be overridden with the attribute _uselib_store='MYPANGO'_.
<3> Check if the pango package is present, and define _HAVE_PANGO_ (calculated automatically from the package name)
<4> Beside defining _HAVE_MYPANGO_, extract and store the relevant flags to the _use variable_ MYPANGO (_LIB_MYPANGO_, _LIBPATH_MYPANGO_, etc)
<5> Like the previous test, but with pkg-config clauses to enforce a particular version number
<6> Display a custom message on the output. The attributes 'okmsg' and 'errmsg' represent the messages to display in case of success and error respectively
<7> Obtain the flags for sdl-config. The example is applicable for other configuration programs such as wx-config, pcre-config, etc
<8> Suppress the configuration error which is raised whenever the program to execute is not found or returns a non-zero exit status
Due to the amount of flags, the lack of standards between config applications, and the compiler-dependent flags (-I for gcc, /I for msvc), the pkg-config output is parsed before setting the corresponding _use variables_ in one go. The function 'parse_flags(line, uselib, env)' in the Waf module c_config.py performs the flag extraction.
The outputs are written in the build directory into the file 'config.log':
[source,shishell]
------------------
# project configured on Tue Aug 31 17:30:21 2010 by
# waf {version} (abi 98, python 20605f0 on linux2)