# rust/mk/tests.mk

# Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
# file at the top-level directory of this distribution and at
# http://rust-lang.org/COPYRIGHT.
#
# Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
# http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
# <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
# option. This file may not be copied, modified, or distributed
# except according to those terms.
######################################################################
# Test variables
######################################################################
# The names of crates that must be tested
# libcore/libunicode tests are in a separate crate
DEPS_coretest :=
$(eval $(call RUST_CRATE,coretest))
TEST_TARGET_CRATES = $(filter-out core unicode,$(TARGET_CRATES)) coretest
TEST_DOC_CRATES = $(DOC_CRATES)
TEST_HOST_CRATES = $(filter-out rustc_typeck rustc_borrowck rustc_resolve rustc_trans,\
$(HOST_CRATES))
TEST_CRATES = $(TEST_TARGET_CRATES) $(TEST_HOST_CRATES)
######################################################################
# Environment configuration
######################################################################
# The arguments to all test runners
ifdef TESTNAME
TESTARGS += $(TESTNAME)
endif
ifdef CHECK_IGNORED
TESTARGS += --ignored
endif
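# Illustrative invocation (the crate and the filter string are placeholders;
# std is assumed to be among TEST_CRATES):
#
#   make check-stage2-std TESTNAME=hash CHECK_IGNORED=1
#
# runs only the std tests whose names contain "hash", and passes --ignored so
# the tests marked #[ignore] are the ones exercised.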
# Arguments to the cfail/rfail/rpass/bench tests
ifdef CFG_VALGRIND
CTEST_RUNTOOL = --runtool "$(CFG_VALGRIND)"
endif
ifdef PLEASE_BENCH
TESTARGS += --bench
endif
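# Illustrative: "PLEASE_BENCH=1 make check" additionally passes --bench to the
# crate test runners so that #[bench] benchmarks are run as part of the checks.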
# Arguments to the perf tests
ifdef CFG_PERF_TOOL
CTEST_PERF_RUNTOOL = --runtool "$(CFG_PERF_TOOL)"
endif
CTEST_TESTARGS := $(TESTARGS)
ifdef VERBOSE
CTEST_TESTARGS += --verbose
endif
# Setting locale ensures that gdb's output remains consistent.
# This prevents tests from failing with some locales (fixes #17423).
export LC_ALL=C
# If we're running perf then set this environment variable
# to put the benchmarks into 'hard mode'
ifeq ($(MAKECMDGOALS),perf)
export RUST_BENCH=1
endif
TEST_LOG_FILE=tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).log
TEST_OK_FILE=tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).ok
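# For example (triples illustrative),
#   $(call TEST_OK_FILE,2,x86_64-unknown-linux-gnu,x86_64-unknown-linux-gnu,std)
# expands to
#   tmp/check-stage2-T-x86_64-unknown-linux-gnu-H-x86_64-unknown-linux-gnu-std.ok
# where $(1) is the stage, $(2) the target triple, $(3) the host triple and
# $(4) the test group.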
define DEF_TARGET_COMMANDS
ifdef CFG_UNIXY_$(1)
CFG_RUN_TEST_$(1)=$$(TARGET_RPATH_VAR$$(2)_T_$$(3)_H_$$(4)) \
$$(call CFG_RUN_$(1),,$$(CFG_VALGRIND) $$(1))
endif
ifdef CFG_WINDOWSY_$(1)
CFG_TESTLIB_$(1)=$$(CFG_BUILD_DIR)$$(2)/$$(strip \
$$(if $$(findstring stage0,$$(1)), \
stage0/$$(CFG_LIBDIR_RELATIVE), \
$$(if $$(findstring stage1,$$(1)), \
stage1/$$(CFG_LIBDIR_RELATIVE), \
$$(if $$(findstring stage2,$$(1)), \
stage2/$$(CFG_LIBDIR_RELATIVE), \
$$(if $$(findstring stage3,$$(1)), \
stage3/$$(CFG_LIBDIR_RELATIVE), \
)))))/rustlib/$$(CFG_BUILD)/lib
CFG_RUN_TEST_$(1)=$$(call CFG_RUN_$(1),$$(call CFG_TESTLIB_$(1),$$(1),$$(4)),$$(1))
endif
# Run the compiletest runner itself under valgrind
ifdef CTEST_VALGRIND
CFG_RUN_CTEST_$(1)=$$(RPATH_VAR$$(1)_T_$$(3)_H_$$(3)) \
$$(call CFG_RUN_TEST_$$(CFG_BUILD),$$(3),$$(4))
else
CFG_RUN_CTEST_$(1)=$$(RPATH_VAR$$(1)_T_$$(3)_H_$$(3)) \
$$(call CFG_RUN_$$(CFG_BUILD),$$(TLIB$$(1)_T_$$(3)_H_$$(3)),$$(2))
endif
endef
$(foreach target,$(CFG_TARGET), \
$(eval $(call DEF_TARGET_COMMANDS,$(target))))
# Target platform specific variables
# for arm-linux-androideabi
define DEF_ADB_DEVICE_STATUS
CFG_ADB_DEVICE_STATUS=$(1)
endef
$(foreach target,$(CFG_TARGET), \
$(if $(findstring $(target),"arm-linux-androideabi"), \
$(if $(findstring adb,$(CFG_ADB)), \
$(if $(findstring device,$(shell $(CFG_ADB) devices 2>/dev/null | grep -E '^[:_A-Za-z0-9-]+[[:blank:]]+device')), \
$(info check: android device attached) \
$(eval $(call DEF_ADB_DEVICE_STATUS, true)), \
$(info check: android device not attached) \
$(eval $(call DEF_ADB_DEVICE_STATUS, false)) \
), \
$(info check: adb not found) \
$(eval $(call DEF_ADB_DEVICE_STATUS, false)) \
), \
) \
)
ifeq ($(CFG_ADB_DEVICE_STATUS),true)
CFG_ADB_TEST_DIR=/data/tmp
$(info check: android device test dir $(CFG_ADB_TEST_DIR) ready \
$(shell $(CFG_ADB) remount 1>/dev/null) \
$(shell $(CFG_ADB) shell rm -r $(CFG_ADB_TEST_DIR) >/dev/null) \
$(shell $(CFG_ADB) shell mkdir $(CFG_ADB_TEST_DIR)) \
$(shell $(CFG_ADB) shell mkdir $(CFG_ADB_TEST_DIR)/tmp) \
$(shell $(CFG_ADB) push $(S)src/etc/adb_run_wrapper.sh $(CFG_ADB_TEST_DIR) 1>/dev/null) \
$(foreach crate,$(TARGET_CRATES), \
$(shell $(CFG_ADB) push $(TLIB2_T_arm-linux-androideabi_H_$(CFG_BUILD))/$(call CFG_LIB_GLOB_arm-linux-androideabi,$(crate)) \
$(CFG_ADB_TEST_DIR))) \
)
else
CFG_ADB_TEST_DIR=
endif
# $(1) - name of doc test
# $(2) - file of the test
define DOCTEST
DOC_NAMES := $$(DOC_NAMES) $(1)
DOCFILE_$(1) := $(2)
endef
$(foreach doc,$(DOCS), \
$(eval $(call DOCTEST,md-$(doc),$(S)src/doc/$(doc).md)))
$(foreach file,$(wildcard $(S)src/doc/trpl/*.md), \
$(eval $(call DOCTEST,$(file:$(S)src/doc/trpl/%.md=trpl-%),$(file))))
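# For example, $(eval $(call DOCTEST,md-foo,$(S)src/doc/foo.md)) (a hypothetical
# document) appends "md-foo" to DOC_NAMES and sets DOCFILE_md-foo to
# $(S)src/doc/foo.md, which the doc-test rules below pick up.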
######################################################################
# Main test targets
######################################################################
# The main testing target. Tests lots of stuff.
check: cleantmptestlogs cleantestlibs check-notidy tidy
# As above but don't bother running tidy.
check-notidy: cleantmptestlogs cleantestlibs all check-stage2
$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log
# A slightly smaller set of tests for smoke testing.
check-lite: cleantestlibs cleantmptestlogs \
$(foreach crate,$(TEST_TARGET_CRATES),check-stage2-$(crate)) \
check-stage2-rpass check-stage2-rpass-valgrind \
check-stage2-rfail check-stage2-cfail check-stage2-rmake
$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log
# Only check the 'reference' tests: rpass/cfail/rfail/rmake.
check-ref: cleantestlibs cleantmptestlogs check-stage2-rpass check-stage2-rpass-valgrind \
check-stage2-rfail check-stage2-cfail check-stage2-rmake
$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log
# Only check the docs.
check-docs: cleantestlibs cleantmptestlogs check-stage2-docs
$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log
# Some less critical tests that are not prone to breakage.
# Not run as part of the normal test suite, but tested by bors on checkin.
check-secondary: check-build-compiletest check-build-lexer-verifier check-lexer check-pretty
# check + check-secondary.
#
# Issue #17883: build check-secondary first so hidden dependencies in
# e.g. building compiletest are exercised (resolve those by adding
# deps to rules that need them; not by putting `check` first here).
check-all: check-secondary check
# Pretty-printing tests.
check-pretty: check-stage2-T-$(CFG_BUILD)-H-$(CFG_BUILD)-pretty-exec
define DEF_CHECK_BUILD_COMPILETEST_FOR_STAGE
check-stage$(1)-build-compiletest: $$(HBIN$(1)_H_$(CFG_BUILD))/compiletest$$(X_$(CFG_BUILD))
endef
$(foreach stage,$(STAGES), \
$(eval $(call DEF_CHECK_BUILD_COMPILETEST_FOR_STAGE,$(stage))))
check-build-compiletest: \
check-stage1-build-compiletest \
check-stage2-build-compiletest
.PHONY: cleantmptestlogs cleantestlibs
cleantmptestlogs:
$(Q)rm -f tmp/*.log
cleantestlibs:
$(Q)find $(CFG_BUILD)/test \
-name '*.[odasS]' -o \
-name '*.so' -o \
-name '*.dylib' -o \
-name '*.dll' -o \
-name '*.def' -o \
-name '*.bc' -o \
-name '*.dSYM' -o \
-name '*.libaux' -o \
-name '*.out' -o \
-name '*.err' -o \
-name '*.debugger.script' \
| xargs rm -rf
######################################################################
# Tidy
######################################################################
ifdef CFG_NOTIDY
tidy:
else
ALL_CS := $(wildcard $(S)src/rt/*.cpp \
$(S)src/rt/*/*.cpp \
$(S)src/rt/*/*/*.cpp \
$(S)src/rustllvm/*.cpp)
ALL_CS := $(filter-out $(S)src/rt/miniz.cpp \
$(wildcard $(S)src/rt/hoedown/src/*.c) \
$(wildcard $(S)src/rt/hoedown/bin/*.c) \
,$(ALL_CS))
ALL_HS := $(wildcard $(S)src/rt/*.h \
$(S)src/rt/*/*.h \
$(S)src/rt/*/*/*.h \
$(S)src/rustllvm/*.h)
ALL_HS := $(filter-out $(S)src/rt/valgrind/valgrind.h \
$(S)src/rt/valgrind/memcheck.h \
$(S)src/rt/msvc/typeof.h \
$(S)src/rt/msvc/stdint.h \
$(S)src/rt/msvc/inttypes.h \
$(wildcard $(S)src/rt/hoedown/src/*.h) \
$(wildcard $(S)src/rt/hoedown/bin/*.h) \
,$(ALL_HS))
# Run the tidy script in multiple parts to avoid huge 'echo' commands
tidy:
@$(call E, check: formatting)
$(Q)find $(S)src -name '*.r[sc]' \
-and -not -regex '^$(S)src/jemalloc.*' \
-and -not -regex '^$(S)src/libuv.*' \
-and -not -regex '^$(S)src/llvm.*' \
-and -not -regex '^$(S)src/gyp.*' \
-and -not -regex '^$(S)src/libbacktrace.*' \
-print0 \
| xargs -0 -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)find $(S)src/etc -name '*.py' \
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)find $(S)src/doc -name '*.js' \
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)find $(S)src/etc -name '*.sh' \
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)find $(S)src/etc -name '*.pl' \
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)find $(S)src/etc -name '*.c' \
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)find $(S)src/etc -name '*.h' \
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)echo $(ALL_CS) \
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)echo $(ALL_HS) \
| xargs -n 10 $(CFG_PYTHON) $(S)src/etc/tidy.py
$(Q)find $(S)src -type f -perm +a+x \
-not -name '*.rs' -and -not -name '*.py' \
-and -not -name '*.sh' \
| grep '^$(S)src/jemalloc' -v \
| grep '^$(S)src/libuv' -v \
| grep '^$(S)src/llvm' -v \
| grep '^$(S)src/rt/hoedown' -v \
| grep '^$(S)src/gyp' -v \
| grep '^$(S)src/etc' -v \
| grep '^$(S)src/doc' -v \
| grep '^$(S)src/compiler-rt' -v \
| grep '^$(S)src/libbacktrace' -v \
| grep '^$(S)src/rust-installer' -v \
| xargs $(CFG_PYTHON) $(S)src/etc/check-binaries.py
$(Q) $(CFG_PYTHON) $(S)src/etc/errorck.py $(S)src/
$(Q) $(CFG_PYTHON) $(S)src/etc/featureck.py $(S)src/
endif
######################################################################
# Sets of tests
######################################################################
define DEF_TEST_SETS
check-stage$(1)-T-$(2)-H-$(3)-exec: \
check-stage$(1)-T-$(2)-H-$(3)-rpass-exec \
check-stage$(1)-T-$(2)-H-$(3)-rfail-exec \
check-stage$(1)-T-$(2)-H-$(3)-cfail-exec \
check-stage$(1)-T-$(2)-H-$(3)-rpass-valgrind-exec \
check-stage$(1)-T-$(2)-H-$(3)-rpass-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-cfail-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-rmake-exec \
check-stage$(1)-T-$(2)-H-$(3)-crates-exec \
check-stage$(1)-T-$(2)-H-$(3)-doc-crates-exec \
check-stage$(1)-T-$(2)-H-$(3)-bench-exec \
check-stage$(1)-T-$(2)-H-$(3)-debuginfo-gdb-exec \
check-stage$(1)-T-$(2)-H-$(3)-debuginfo-lldb-exec \
check-stage$(1)-T-$(2)-H-$(3)-codegen-exec \
check-stage$(1)-T-$(2)-H-$(3)-doc-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-exec
# Only test the compiler-dependent crates when the target is
# able to build a compiler (when the target triple is in the set of host triples)
ifneq ($$(findstring $(2),$$(CFG_HOST)),)
check-stage$(1)-T-$(2)-H-$(3)-crates-exec: \
$$(foreach crate,$$(TEST_CRATES), \
check-stage$(1)-T-$(2)-H-$(3)-$$(crate)-exec)
else
check-stage$(1)-T-$(2)-H-$(3)-crates-exec: \
$$(foreach crate,$$(TEST_TARGET_CRATES), \
check-stage$(1)-T-$(2)-H-$(3)-$$(crate)-exec)
endif
check-stage$(1)-T-$(2)-H-$(3)-doc-crates-exec: \
$$(foreach crate,$$(TEST_DOC_CRATES), \
check-stage$(1)-T-$(2)-H-$(3)-doc-crate-$$(crate)-exec)
check-stage$(1)-T-$(2)-H-$(3)-doc-exec: \
$$(foreach docname,$$(DOC_NAMES), \
check-stage$(1)-T-$(2)-H-$(3)-doc-$$(docname)-exec) \
check-stage$(1)-T-$(2)-H-$(3)-pretty-exec: \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-valgrind-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rpass-full-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-rfail-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-bench-exec \
check-stage$(1)-T-$(2)-H-$(3)-pretty-pretty-exec
endef
$(foreach host,$(CFG_HOST), \
$(foreach target,$(CFG_TARGET), \
$(foreach stage,$(STAGES), \
$(eval $(call DEF_TEST_SETS,$(stage),$(target),$(host))))))
######################################################################
# Crate testing
######################################################################
define TEST_RUNNER
# If NO_REBUILD is set then break the dependencies on everything but
# the source files so we can test crates without rebuilding any of the
# parent crates.
ifeq ($(NO_REBUILD),)
TESTDEP_$(1)_$(2)_$(3)_$(4) = $$(SREQ$(1)_T_$(2)_H_$(3)) \
$$(foreach crate,$$(TARGET_CRATES), \
$$(TLIB$(1)_T_$(2)_H_$(3))/stamp.$$(crate)) \
$$(CRATE_FULLDEPS_$(1)_T_$(2)_H_$(3)_$(4))
else
TESTDEP_$(1)_$(2)_$(3)_$(4) = $$(RSINPUTS_$(4))
endif
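# Illustrative invocation (crate name is a placeholder): rerun a crate's tests
# without rebuilding the parent crates it depends on:
#
#   NO_REBUILD=1 make check-stage2-coretest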
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2)): CFG_COMPILER_HOST_TRIPLE = $(2)
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2)): \
$$(CRATEFILE_$(4)) \
$$(TESTDEP_$(1)_$(2)_$(3)_$(4))
@$$(call E, rustc: $$@)
$(Q)CFG_LLVM_LINKAGE_FILE=$$(LLVM_LINKAGE_PATH_$(3)) \
$$(subst @,,$$(STAGE$(1)_T_$(2)_H_$(3))) -o $$@ $$< --test \
-L "$$(RT_OUTPUT_DIR_$(2))" \
-L "$$(LLVM_LIBDIR_$(2))" \
$$(RUSTFLAGS_$(4))
endef
$(foreach host,$(CFG_HOST), \
$(eval $(foreach target,$(CFG_TARGET), \
$(eval $(foreach stage,$(STAGES), \
$(eval $(foreach crate,$(TEST_CRATES), \
$(eval $(call TEST_RUNNER,$(stage),$(target),$(host),$(crate))))))))))
define DEF_TEST_CRATE_RULES
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2))
@$$(call E, run: $$<)
$$(Q)$$(call CFG_RUN_TEST_$(2),$$<,$(1),$(2),$(3)) $$(TESTARGS) \
--logfile $$(call TEST_LOG_FILE,$(1),$(2),$(3),$(4)) \
$$(call CRATE_TEST_EXTRA_ARGS,$(1),$(2),$(3),$(4)) \
&& touch $$@
endef
define DEF_TEST_CRATE_RULES_arm-linux-androideabi
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2))
@$$(call E, run: $$< via adb)
$$(Q)$(CFG_ADB) push $$< $(CFG_ADB_TEST_DIR)
$$(Q)$(CFG_ADB) shell '(cd $(CFG_ADB_TEST_DIR); LD_LIBRARY_PATH=. \
./$$(notdir $$<) \
--logfile $(CFG_ADB_TEST_DIR)/check-stage$(1)-T-$(2)-H-$(3)-$(4).log \
$$(call CRATE_TEST_EXTRA_ARGS,$(1),$(2),$(3),$(4)) $(TESTARGS))' \
> tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp
$$(Q)cat tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp
$$(Q)touch tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).log
$$(Q)$(CFG_ADB) pull $(CFG_ADB_TEST_DIR)/check-stage$(1)-T-$(2)-H-$(3)-$(4).log tmp/
$$(Q)$(CFG_ADB) shell rm $(CFG_ADB_TEST_DIR)/check-stage$(1)-T-$(2)-H-$(3)-$(4).log
@if grep -q "result: ok" tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp; \
then \
rm tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp; \
touch $$@; \
else \
rm tmp/check-stage$(1)-T-$(2)-H-$(3)-$(4).tmp; \
exit 101; \
fi
endef
define DEF_TEST_CRATE_RULES_null
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
$(3)/stage$(1)/test/$(4)test-$(2)$$(X_$(2))
@$$(call E, failing: no device for $$< )
false
endef
$(foreach host,$(CFG_HOST), \
$(foreach target,$(CFG_TARGET), \
$(foreach stage,$(STAGES), \
$(foreach crate, $(TEST_CRATES), \
$(if $(findstring $(target),$(CFG_BUILD)), \
$(eval $(call DEF_TEST_CRATE_RULES,$(stage),$(target),$(host),$(crate))), \
$(if $(findstring $(target),"arm-linux-androideabi"), \
$(if $(findstring $(CFG_ADB_DEVICE_STATUS),"true"), \
$(eval $(call DEF_TEST_CRATE_RULES_arm-linux-androideabi,$(stage),$(target),$(host),$(crate))), \
$(eval $(call DEF_TEST_CRATE_RULES_null,$(stage),$(target),$(host),$(crate))) \
), \
$(eval $(call DEF_TEST_CRATE_RULES,$(stage),$(target),$(host),$(crate))) \
))))))
######################################################################
# Rules for the compiletest tests (rpass, rfail, etc.)
######################################################################
RPASS_RS := $(wildcard $(S)src/test/run-pass/*.rs)
RPASS_VALGRIND_RS := $(wildcard $(S)src/test/run-pass-valgrind/*.rs)
RPASS_FULL_RS := $(wildcard $(S)src/test/run-pass-fulldeps/*.rs)
CFAIL_FULL_RS := $(wildcard $(S)src/test/compile-fail-fulldeps/*.rs)
RFAIL_RS := $(wildcard $(S)src/test/run-fail/*.rs)
CFAIL_RS := $(wildcard $(S)src/test/compile-fail/*.rs)
BENCH_RS := $(wildcard $(S)src/test/bench/*.rs)
PRETTY_RS := $(wildcard $(S)src/test/pretty/*.rs)
DEBUGINFO_GDB_RS := $(wildcard $(S)src/test/debuginfo/*.rs)
DEBUGINFO_LLDB_RS := $(wildcard $(S)src/test/debuginfo/*.rs)
CODEGEN_RS := $(wildcard $(S)src/test/codegen/*.rs)
CODEGEN_CC := $(wildcard $(S)src/test/codegen/*.cc)
# perf tests are the same as the bench tests, except that they run under
# a performance monitor.
PERF_RS := $(wildcard $(S)src/test/bench/*.rs)
RPASS_TESTS := $(RPASS_RS)
RPASS_VALGRIND_TESTS := $(RPASS_VALGRIND_RS)
RPASS_FULL_TESTS := $(RPASS_FULL_RS)
CFAIL_FULL_TESTS := $(CFAIL_FULL_RS)
RFAIL_TESTS := $(RFAIL_RS)
CFAIL_TESTS := $(CFAIL_RS)
BENCH_TESTS := $(BENCH_RS)
PERF_TESTS := $(PERF_RS)
PRETTY_TESTS := $(PRETTY_RS)
DEBUGINFO_GDB_TESTS := $(DEBUGINFO_GDB_RS)
DEBUGINFO_LLDB_TESTS := $(DEBUGINFO_LLDB_RS)
CODEGEN_TESTS := $(CODEGEN_RS) $(CODEGEN_CC)
CTEST_SRC_BASE_rpass = run-pass
CTEST_BUILD_BASE_rpass = run-pass
CTEST_MODE_rpass = run-pass
CTEST_RUNTOOL_rpass = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_rpass-valgrind = run-pass-valgrind
CTEST_BUILD_BASE_rpass-valgrind = run-pass-valgrind
CTEST_MODE_rpass-valgrind = run-pass-valgrind
CTEST_RUNTOOL_rpass-valgrind = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_rpass-full = run-pass-fulldeps
CTEST_BUILD_BASE_rpass-full = run-pass-fulldeps
CTEST_MODE_rpass-full = run-pass
CTEST_RUNTOOL_rpass-full = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_cfail-full = compile-fail-fulldeps
CTEST_BUILD_BASE_cfail-full = compile-fail-fulldeps
CTEST_MODE_cfail-full = compile-fail
CTEST_RUNTOOL_cfail-full = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_rfail = run-fail
CTEST_BUILD_BASE_rfail = run-fail
CTEST_MODE_rfail = run-fail
CTEST_RUNTOOL_rfail = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_cfail = compile-fail
CTEST_BUILD_BASE_cfail = compile-fail
CTEST_MODE_cfail = compile-fail
CTEST_RUNTOOL_cfail = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_bench = bench
CTEST_BUILD_BASE_bench = bench
CTEST_MODE_bench = run-pass
CTEST_RUNTOOL_bench = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_perf = bench
CTEST_BUILD_BASE_perf = perf
CTEST_MODE_perf = run-pass
CTEST_RUNTOOL_perf = $(CTEST_PERF_RUNTOOL)
CTEST_SRC_BASE_debuginfo-gdb = debuginfo
CTEST_BUILD_BASE_debuginfo-gdb = debuginfo-gdb
CTEST_MODE_debuginfo-gdb = debuginfo-gdb
CTEST_RUNTOOL_debuginfo-gdb = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_debuginfo-lldb = debuginfo
CTEST_BUILD_BASE_debuginfo-lldb = debuginfo-lldb
CTEST_MODE_debuginfo-lldb = debuginfo-lldb
CTEST_RUNTOOL_debuginfo-lldb = $(CTEST_RUNTOOL)
CTEST_SRC_BASE_codegen = codegen
CTEST_BUILD_BASE_codegen = codegen
CTEST_MODE_codegen = codegen
CTEST_RUNTOOL_codegen = $(CTEST_RUNTOOL)
# CTEST_DISABLE_$(TEST_GROUP), if set, will cause the test group to be
# disabled and the associated message to be printed as a warning
# during attempts to run those tests.
ifeq ($(CFG_GDB),)
CTEST_DISABLE_debuginfo-gdb = "no gdb found"
endif
ifeq ($(CFG_LLDB),)
CTEST_DISABLE_debuginfo-lldb = "no lldb found"
endif
ifeq ($(CFG_CLANG),)
CTEST_DISABLE_codegen = "no clang found"
endif
ifneq ($(CFG_OSTYPE),apple-darwin)
CTEST_DISABLE_debuginfo-lldb = "lldb tests are only run on darwin"
endif
ifeq ($(CFG_OSTYPE),apple-darwin)
CTEST_DISABLE_debuginfo-gdb = "gdb on darwin needs root"
endif
# CTEST_DISABLE_NONSELFHOST_$(TEST_GROUP), if set, will cause that
# test group to be disabled *unless* the target is able to build a
# compiler (i.e. when the target triple is in the set of host
# triples). The associated message will be printed as a warning
# during attempts to run those tests.
define DEF_CTEST_VARS
# All the per-stage build rules you might want to call from the
# command line.
#
# $(1) is the stage number
# $(2) is the target triple to test
# $(3) is the host triple to test
# Prerequisites for compiletest tests
TEST_SREQ$(1)_T_$(2)_H_$(3) = \
$$(HBIN$(1)_H_$(3))/compiletest$$(X_$(3)) \
$$(SREQ$(1)_T_$(2)_H_$(3))
# Rules for the cfail/rfail/rpass/bench/perf test runner
# The tests select when to use the debug configuration on their own;
# remove the directive, if present, from CFG_RUSTC_FLAGS (issue #7898).
CTEST_RUSTC_FLAGS := $$(subst --cfg ndebug,,$$(CFG_RUSTC_FLAGS))
# The tests cannot simply inherit the optimization setting used for the rest
# of the compiler, so filter any -O out of the inherited rustc flags and then
# decide below whether the tests themselves should be optimized.
CTEST_RUSTC_FLAGS := $$(subst -O,,$$(CTEST_RUSTC_FLAGS))
ifndef CFG_DISABLE_OPTIMIZE_TESTS
CTEST_RUSTC_FLAGS += -O
endif
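# Worked example (flag values illustrative): if CFG_RUSTC_FLAGS contains
# "--cfg ndebug -O", the two subst calls above strip both pieces, and -O is
# then re-added only when CFG_DISABLE_OPTIMIZE_TESTS is not set.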
CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3) := \
--compile-lib-path $$(HLIB$(1)_H_$(3)) \
--run-lib-path $$(TLIB$(1)_T_$(2)_H_$(3)) \
--rustc-path $$(HBIN$(1)_H_$(3))/rustc$$(X_$(3)) \
--clang-path $(if $(CFG_CLANG),$(CFG_CLANG),clang) \
--llvm-bin-path $(CFG_LLVM_INST_DIR_$(CFG_BUILD))/bin \
--aux-base $$(S)src/test/auxiliary/ \
--stage-id stage$(1)-$(2) \
--target $(2) \
--host $(3) \
--gdb-version="$(CFG_GDB_VERSION)" \
--lldb-version="$(CFG_LLDB_VERSION)" \
--android-cross-path=$(CFG_ANDROID_CROSS_PATH) \
--adb-path=$(CFG_ADB) \
--adb-test-dir=$(CFG_ADB_TEST_DIR) \
--host-rustcflags "$(RUSTC_FLAGS_$(3)) $$(CTEST_RUSTC_FLAGS) -L $$(RT_OUTPUT_DIR_$(3))" \
--lldb-python-dir=$(CFG_LLDB_PYTHON_DIR) \
--target-rustcflags "$(RUSTC_FLAGS_$(2)) $$(CTEST_RUSTC_FLAGS) -L $$(RT_OUTPUT_DIR_$(2))" \
$$(CTEST_TESTARGS)
ifdef CFG_VALGRIND_RPASS
ifdef GOOD_VALGRIND_$(2)
$(info cfg: valgrind-path set to $(CFG_VALGRIND_RPASS))
CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3) += --valgrind-path "$(CFG_VALGRIND_RPASS)"
endif
endif
ifndef CFG_DISABLE_VALGRIND_RPASS
ifdef GOOD_VALGRIND_$(2)
CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3) += --force-valgrind
endif
endif
CTEST_DEPS_rpass_$(1)-T-$(2)-H-$(3) = $$(RPASS_TESTS)
CTEST_DEPS_rpass-valgrind_$(1)-T-$(2)-H-$(3) = $$(RPASS_VALGRIND_TESTS)
CTEST_DEPS_rpass-full_$(1)-T-$(2)-H-$(3) = $$(RPASS_FULL_TESTS) $$(CSREQ$(1)_T_$(3)_H_$(3)) $$(SREQ$(1)_T_$(2)_H_$(3))
CTEST_DEPS_cfail-full_$(1)-T-$(2)-H-$(3) = $$(CFAIL_FULL_TESTS) $$(CSREQ$(1)_T_$(3)_H_$(3)) $$(SREQ$(1)_T_$(2)_H_$(3))
CTEST_DEPS_rfail_$(1)-T-$(2)-H-$(3) = $$(RFAIL_TESTS)
CTEST_DEPS_cfail_$(1)-T-$(2)-H-$(3) = $$(CFAIL_TESTS)
CTEST_DEPS_bench_$(1)-T-$(2)-H-$(3) = $$(BENCH_TESTS)
CTEST_DEPS_perf_$(1)-T-$(2)-H-$(3) = $$(PERF_TESTS)
CTEST_DEPS_debuginfo-gdb_$(1)-T-$(2)-H-$(3) = $$(DEBUGINFO_GDB_TESTS)
CTEST_DEPS_debuginfo-lldb_$(1)-T-$(2)-H-$(3) = $$(DEBUGINFO_LLDB_TESTS) \
$(S)src/etc/lldb_batchmode.py \
$(S)src/etc/lldb_rust_formatters.py
CTEST_DEPS_codegen_$(1)-T-$(2)-H-$(3) = $$(CODEGEN_TESTS)
endef
$(foreach host,$(CFG_HOST), \
$(eval $(foreach target,$(CFG_TARGET), \
$(eval $(foreach stage,$(STAGES), \
$(eval $(call DEF_CTEST_VARS,$(stage),$(target),$(host))))))))
define DEF_RUN_COMPILETEST
CTEST_ARGS$(1)-T-$(2)-H-$(3)-$(4) := \
$$(CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3)) \
--src-base $$(S)src/test/$$(CTEST_SRC_BASE_$(4))/ \
--build-base $(3)/test/$$(CTEST_BUILD_BASE_$(4))/ \
--mode $$(CTEST_MODE_$(4)) \
$$(CTEST_RUNTOOL_$(4))
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
# CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4)
# Goal: leave this variable as the empty string if we should run the test.
# Otherwise, set it to the reason we are not running the test.
# (Encoded as a separate variable because GNU make does not have a
# good way to express OR in ifeq conditionals.)
ifneq ($$(CTEST_DISABLE_$(4)),)
# Test suite is disabled for all configured targets.
CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4) := $$(CTEST_DISABLE_$(4))
else
# Otherwise, check whether this is a non-self-hosted target (i.e. the target is not in the host set) ...
ifeq ($$(findstring $(2),$$(CFG_HOST)),)
# ... and if so, check whether this test suite is disabled for non-self-hosted targets.
ifneq ($$(CTEST_DISABLE_NONSELFHOST_$(4)),)
# Test suite is disabled for this target.
CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4) := $$(CTEST_DISABLE_NONSELFHOST_$(4))
endif
endif
# Neither DISABLE nor DISABLE_NONSELFHOST is set ==> okay, run the test.
endif
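# Worked example: when no gdb was found at configure time, CFG_GDB is empty,
# CTEST_DISABLE_debuginfo-gdb is set to "no gdb found" above, and the variable
# here picks up that reason; the fallback rule below then only prints the
# warning and touches the .ok stamp instead of running compiletest.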
ifeq ($$(CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4)),)
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
$$(TEST_SREQ$(1)_T_$(2)_H_$(3)) \
$$(CTEST_DEPS_$(4)_$(1)-T-$(2)-H-$(3))
@$$(call E, run $(4) [$(2)]: $$<)
$$(Q)$$(call CFG_RUN_CTEST_$(2),$(1),$$<,$(3)) \
$$(CTEST_ARGS$(1)-T-$(2)-H-$(3)-$(4)) \
--logfile $$(call TEST_LOG_FILE,$(1),$(2),$(3),$(4)) \
&& touch $$@
else
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)):
@$$(call E, run $(4) [$(2)]: $$<)
@$$(call E, warning: tests disabled: $$(CTEST_DONT_RUN_$(1)-T-$(2)-H-$(3)-$(4)))
touch $$@
endif
endef
CTEST_NAMES = rpass rpass-valgrind rpass-full cfail-full rfail cfail bench perf debuginfo-gdb debuginfo-lldb codegen
$(foreach host,$(CFG_HOST), \
$(eval $(foreach target,$(CFG_TARGET), \
$(eval $(foreach stage,$(STAGES), \
$(eval $(foreach name,$(CTEST_NAMES), \
$(eval $(call DEF_RUN_COMPILETEST,$(stage),$(target),$(host),$(name))))))))))
PRETTY_NAMES = pretty-rpass pretty-rpass-valgrind pretty-rpass-full pretty-rfail pretty-bench pretty-pretty
PRETTY_DEPS_pretty-rpass = $(RPASS_TESTS)
PRETTY_DEPS_pretty-rpass-valgrind = $(RPASS_VALGRIND_TESTS)
PRETTY_DEPS_pretty-rpass-full = $(RPASS_FULL_TESTS)
PRETTY_DEPS_pretty-rfail = $(RFAIL_TESTS)
PRETTY_DEPS_pretty-bench = $(BENCH_TESTS)
PRETTY_DEPS_pretty-pretty = $(PRETTY_TESTS)
# The stage- and host-specific dependencies are for e.g. macro_crate_test which pulls in
# external crates.
PRETTY_DEPS$(1)_H_$(3)_pretty-rpass =
PRETTY_DEPS$(1)_H_$(3)_pretty-rpass-full = $$(HLIB$(1)_H_$(3))/stamp.syntax $$(HLIB$(1)_H_$(3))/stamp.rustc
PRETTY_DEPS$(1)_H_$(3)_pretty-rfail =
PRETTY_DEPS$(1)_H_$(3)_pretty-bench =
PRETTY_DEPS$(1)_H_$(3)_pretty-pretty =
PRETTY_DIRNAME_pretty-rpass = run-pass
PRETTY_DIRNAME_pretty-rpass-valgrind = run-pass-valgrind
PRETTY_DIRNAME_pretty-rpass-full = run-pass-fulldeps
PRETTY_DIRNAME_pretty-rfail = run-fail
PRETTY_DIRNAME_pretty-bench = bench
PRETTY_DIRNAME_pretty-pretty = pretty
define DEF_RUN_PRETTY_TEST
PRETTY_ARGS$(1)-T-$(2)-H-$(3)-$(4) := \
$$(CTEST_COMMON_ARGS$(1)-T-$(2)-H-$(3)) \
--src-base $$(S)src/test/$$(PRETTY_DIRNAME_$(4))/ \
--build-base $(3)/test/$$(PRETTY_DIRNAME_$(4))/ \
--mode pretty
check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),$(4))
$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): \
$$(TEST_SREQ$(1)_T_$(2)_H_$(3)) \
$$(PRETTY_DEPS_$(4)) \
$$(PRETTY_DEPS$(1)_H_$(3)_$(4))
@$$(call E, run $(4) [$(2)]: $$<)
$$(Q)$$(call CFG_RUN_CTEST_$(2),$(1),$$<,$(3)) \
$$(PRETTY_ARGS$(1)-T-$(2)-H-$(3)-$(4)) \
--logfile $$(call TEST_LOG_FILE,$(1),$(2),$(3),$(4)) \
&& touch $$@
endef
$(foreach host,$(CFG_HOST), \
$(foreach target,$(CFG_TARGET), \
$(foreach stage,$(STAGES), \
$(foreach pretty-name,$(PRETTY_NAMES), \
$(eval $(call DEF_RUN_PRETTY_TEST,$(stage),$(target),$(host),$(pretty-name)))))))
######################################################################
# Crate & freestanding documentation tests
######################################################################
define DEF_RUSTDOC
RUSTDOC_EXE_$(1)_T_$(2)_H_$(3) := $$(HBIN$(1)_H_$(3))/rustdoc$$(X_$(3))
RUSTDOC_$(1)_T_$(2)_H_$(3) := $$(RPATH_VAR$(1)_T_$(2)_H_$(3)) $$(RUSTDOC_EXE_$(1)_T_$(2)_H_$(3))
endef
$(foreach host,$(CFG_HOST), \
$(foreach target,$(CFG_TARGET), \
$(foreach stage,$(STAGES), \
$(eval $(call DEF_RUSTDOC,$(stage),$(target),$(host))))))
# Freestanding
define DEF_DOC_TEST
check-stage$(1)-T-$(2)-H-$(3)-doc-$(4)-exec: $$(call TEST_OK_FILE,$(1),$(2),$(3),doc-$(4))
# If NO_REBUILD is set then break the dependencies on everything but
# the source files so we can test documentation without rebuilding
# rustdoc etc.
ifeq ($(NO_REBUILD),)
DOCTESTDEP_$(1)_$(2)_$(3)_$(4) = \
$$(DOCFILE_$(4)) \
$$(TEST_SREQ$(1)_T_$(2)_H_$(3)) \
$$(RUSTDOC_EXE_$(1)_T_$(2)_H_$(3))
else
DOCTESTDEP_$(1)_$(2)_$(3)_$(4) = $$(DOCFILE_$(4))
endif
ifeq ($(2),$$(CFG_BUILD))
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-$(4)): $$(DOCTESTDEP_$(1)_$(2)_$(3)_$(4))
@$$(call E, run doc-$(4) [$(2)])
$$(Q)$$(RUSTDOC_$(1)_T_$(2)_H_$(3)) --cfg dox --test $$< \
--test-args "$$(TESTARGS)" && touch $$@
else
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-$(4)):
touch $$@
endif
endef
$(foreach host,$(CFG_HOST), \
$(foreach target,$(CFG_TARGET), \
$(foreach stage,$(STAGES), \
$(foreach docname,$(DOC_NAMES), \
$(eval $(call DEF_DOC_TEST,$(stage),$(target),$(host),$(docname)))))))
# Crates
define DEF_CRATE_DOC_TEST
# If NO_REBUILD is set then break the dependencies on everything but
# the source files so we can test crate documentation without
# rebuilding any of the parent crates.
ifeq ($(NO_REBUILD),)
CRATEDOCTESTDEP_$(1)_$(2)_$(3)_$(4) = \
$$(TEST_SREQ$(1)_T_$(2)_H_$(3)) \
$$(CRATE_FULLDEPS_$(1)_T_$(2)_H_$(3)_$(4)) \
$$(RUSTDOC_EXE_$(1)_T_$(2)_H_$(3))
else
CRATEDOCTESTDEP_$(1)_$(2)_$(3)_$(4) = $$(RSINPUTS_$(4))
endif
check-stage$(1)-T-$(2)-H-$(3)-doc-crate-$(4)-exec: \
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-crate-$(4))
ifeq ($(2),$$(CFG_BUILD))
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-crate-$(4)): $$(CRATEDOCTESTDEP_$(1)_$(2)_$(3)_$(4))
@$$(call E, run doc-crate-$(4) [$(2)])
$$(Q)CFG_LLVM_LINKAGE_FILE=$$(LLVM_LINKAGE_PATH_$(3)) \
$$(RUSTDOC_$(1)_T_$(2)_H_$(3)) --test --cfg dox \
$$(CRATEFILE_$(4)) --test-args "$$(TESTARGS)" && touch $$@
else
$$(call TEST_OK_FILE,$(1),$(2),$(3),doc-crate-$(4)):
touch $$@
endif
endef
$(foreach host,$(CFG_HOST), \
$(foreach target,$(CFG_TARGET), \
$(foreach stage,$(STAGES), \
$(foreach crate,$(TEST_DOC_CRATES), \
$(eval $(call DEF_CRATE_DOC_TEST,$(stage),$(target),$(host),$(crate)))))))
######################################################################
# Shortcut rules
######################################################################
TEST_GROUPS = \
crates \
$(foreach crate,$(TEST_CRATES),$(crate)) \
$(foreach crate,$(TEST_DOC_CRATES),doc-crate-$(crate)) \
rpass \
rpass-valgrind \
rpass-full \
cfail-full \
rfail \
cfail \
bench \
perf \
rmake \
debuginfo-gdb \
debuginfo-lldb \
codegen \
doc \
$(foreach docname,$(DOC_NAMES),doc-$(docname)) \
pretty \
pretty-rpass \
pretty-rpass-valgrind \
pretty-rpass-full \
pretty-rfail \
pretty-bench \
pretty-pretty \
$(NULL)
define DEF_CHECK_FOR_STAGE_AND_TARGET_AND_HOST
check-stage$(1)-T-$(2)-H-$(3): check-stage$(1)-T-$(2)-H-$(3)-exec
endef
$(foreach stage,$(STAGES), \
$(foreach target,$(CFG_TARGET), \
$(foreach host,$(CFG_HOST), \
$(eval $(call DEF_CHECK_FOR_STAGE_AND_TARGET_AND_HOST,$(stage),$(target),$(host))))))
define DEF_CHECK_FOR_STAGE_AND_TARGET_AND_HOST_AND_GROUP
check-stage$(1)-T-$(2)-H-$(3)-$(4): check-stage$(1)-T-$(2)-H-$(3)-$(4)-exec
endef
$(foreach stage,$(STAGES), \
$(foreach target,$(CFG_TARGET), \
$(foreach host,$(CFG_HOST), \
$(foreach group,$(TEST_GROUPS), \
$(eval $(call DEF_CHECK_FOR_STAGE_AND_TARGET_AND_HOST_AND_GROUP,$(stage),$(target),$(host),$(group)))))))
define DEF_CHECK_FOR_STAGE
check-stage$(1): check-stage$(1)-H-$$(CFG_BUILD)
check-stage$(1)-H-all: $$(foreach target,$$(CFG_TARGET), \
check-stage$(1)-H-$$(target))
endef
$(foreach stage,$(STAGES), \
$(eval $(call DEF_CHECK_FOR_STAGE,$(stage))))
define DEF_CHECK_FOR_STAGE_AND_GROUP
check-stage$(1)-$(2): check-stage$(1)-H-$$(CFG_BUILD)-$(2)
check-stage$(1)-H-all-$(2): $$(foreach target,$$(CFG_TARGET), \
check-stage$(1)-H-$$(target)-$(2))
endef
$(foreach stage,$(STAGES), \
$(foreach group,$(TEST_GROUPS), \
$(eval $(call DEF_CHECK_FOR_STAGE_AND_GROUP,$(stage),$(group)))))
define DEF_CHECK_FOR_STAGE_AND_HOSTS
check-stage$(1)-H-$(2): $$(foreach target,$$(CFG_TARGET), \
check-stage$(1)-T-$$(target)-H-$(2))
endef
$(foreach stage,$(STAGES), \
$(foreach host,$(CFG_HOST), \
$(eval $(call DEF_CHECK_FOR_STAGE_AND_HOSTS,$(stage),$(host)))))
define DEF_CHECK_FOR_STAGE_AND_HOSTS_AND_GROUP
check-stage$(1)-H-$(2)-$(3): $$(foreach target,$$(CFG_TARGET), \
check-stage$(1)-T-$$(target)-H-$(2)-$(3))
endef
$(foreach stage,$(STAGES), \
$(foreach host,$(CFG_HOST), \
$(foreach group,$(TEST_GROUPS), \
$(eval $(call DEF_CHECK_FOR_STAGE_AND_HOSTS_AND_GROUP,$(stage),$(host),$(group))))))
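# For example (build triple illustrative), "make check-stage2-rpass" resolves
# to check-stage2-H-$(CFG_BUILD)-rpass, which fans out to
# check-stage2-T-<target>-H-$(CFG_BUILD)-rpass for every triple in CFG_TARGET,
# and each of those depends on the corresponding -exec target defined earlier.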
define DEF_CHECK_DOC_FOR_STAGE
check-stage$(1)-docs: $$(foreach docname,$$(DOC_NAMES), \
check-stage$(1)-T-$$(CFG_BUILD)-H-$$(CFG_BUILD)-doc-$$(docname)) \
$$(foreach crate,$$(TEST_DOC_CRATES), \
check-stage$(1)-T-$$(CFG_BUILD)-H-$$(CFG_BUILD)-doc-crate-$$(crate))
endef
$(foreach stage,$(STAGES), \
$(eval $(call DEF_CHECK_DOC_FOR_STAGE,$(stage))))
define DEF_CHECK_CRATE
check-$(1): check-stage2-T-$$(CFG_BUILD)-H-$$(CFG_BUILD)-$(1)-exec
endef
$(foreach crate,$(TEST_CRATES), \
$(eval $(call DEF_CHECK_CRATE,$(crate))))
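# For example (crate name illustrative), "make check-coretest" is shorthand
# for check-stage2-T-$(CFG_BUILD)-H-$(CFG_BUILD)-coretest-exec.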
######################################################################
# RMAKE rules
######################################################################
RMAKE_TESTS := $(shell ls -d $(S)src/test/run-make/*/)
RMAKE_TESTS := $(RMAKE_TESTS:$(S)src/test/run-make/%/=%)
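# For example, a directory $(S)src/test/run-make/foo/ (name hypothetical)
# appears in RMAKE_TESTS as "foo"; for the build triple, the rules below drive
# it through src/etc/maketest.py and record success in a stamp file of the
# form <host>/test/run-make/foo-<stage>-T-<target>-H-<host>.ok.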
define DEF_RMAKE_FOR_T_H
# $(1) the stage
# $(2) target triple
# $(3) host triple
ifeq ($(2)$(3),$$(CFG_BUILD)$$(CFG_BUILD))
check-stage$(1)-T-$(2)-H-$(3)-rmake-exec: \
$$(call TEST_OK_FILE,$(1),$(2),$(3),rmake)
$$(call TEST_OK_FILE,$(1),$(2),$(3),rmake): \
$$(RMAKE_TESTS:%=$(3)/test/run-make/%-$(1)-T-$(2)-H-$(3).ok)
@touch $$@
$(3)/test/run-make/%-$(1)-T-$(2)-H-$(3).ok: \
$(S)src/test/run-make/%/Makefile \
$$(CSREQ$(1)_T_$(2)_H_$(3))
@rm -rf $(3)/test/run-make/$$*
@mkdir -p $(3)/test/run-make/$$*
$$(Q)$$(CFG_PYTHON) $(S)src/etc/maketest.py $$(dir $$<) \
$$(MAKE) \
$$(HBIN$(1)_H_$(3))/rustc$$(X_$(3)) \
$(3)/test/run-make/$$* \
"$$(CC_$(3)) $$(CFG_GCCISH_CFLAGS_$(3))" \
$$(HBIN$(1)_H_$(3))/rustdoc$$(X_$(3)) \
"$$(TESTNAME)" \
$$(LD_LIBRARY_PATH_ENV_NAME$(1)_T_$(2)_H_$(3)) \
"$$(LD_LIBRARY_PATH_ENV_HOSTDIR$(1)_T_$(2)_H_$(3))" \
"$$(LD_LIBRARY_PATH_ENV_TARGETDIR$(1)_T_$(2)_H_$(3))" \
$(1) \
$$(S)
@touch $$@
else
# FIXME #11094 - The above rule doesn't work right for multiple targets
check-stage$(1)-T-$(2)-H-$(3)-rmake-exec:
@true
endif
endef
$(foreach stage,$(STAGES), \
$(foreach target,$(CFG_TARGET), \
$(foreach host,$(CFG_HOST), \
$(eval $(call DEF_RMAKE_FOR_T_H,$(stage),$(target),$(host))))))