Commit Graph

21 Commits

Author SHA1 Message Date
Joseph Myers 04277e02d7 Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2019-01-01 00:11:28 +00:00
Siddhesh Poyarekar 44727aec4f [benchtests] Add mandatory attributes to workload tests
Add the duration and iterations attributes to the workload tests to
make the json schema parser happy.
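
For illustration, this is roughly what a workload entry looks like once
the mandatory attributes are present (the duration and iterations values
are made up; the remaining fields come from the latency commit's sample
further down):

   "workload-spec2006.wrf": {
    "duration": 9.96e+09,
    "iterations": 1.5e+08,
    "reciprocal-throughput": 100,
    "latency": 200,
    "max-throughput": 1.0e+07,
    "min-throughput": 5.0e+06
   }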

	* benchtests/bench-skeleton.c (main): Add duration and
	iterations attributes.
2018-08-11 18:55:07 +05:30
Joseph Myers 688903eb3e Update copyright dates with scripts/update-copyrights.
* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
Wilco Dijkstra d4505b895f Add math benchmark latency test
This patch further improves math function benchmarking by adding a latency
test in addition to throughput.  This enables more accurate comparisons of the
math functions. The latency test works by creating a dependency on the previous
iteration: func_res = F (func_res * zero + input[i]). The multiply by zero
avoids changing the input.
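
A minimal C sketch of that latency loop (the function and variable names
here are illustrative, not the identifiers used in bench-skeleton.c):

  #include <stddef.h>

  double
  measure_latency (double (*f) (double), const double *input, size_t n)
  {
    double zero = 0.0;
    double func_res = 0.0;

    for (size_t i = 0; i < n; i++)
      /* Each call consumes the previous result, so the CPU cannot
         overlap the calls; multiplying by zero leaves the actual
         input value unchanged.  */
      func_res = f (func_res * zero + input[i]);
    return func_res;
  }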

It reports reciprocal throughput and latency in nanoseconds (depending on the
timing header used) and max/min throughput in iterations per second:

   "workload-spec2006.wrf": {
    "reciprocal-throughput": 100,
    "latency": 200,
    "max-throughput": 1.0e+07,
    "min-throughput": 5.0e+06
   }

	* benchtests/bench-skeleton.c (main): Add support for
	latency benchmarking.
	* benchtests/scripts/bench.py: Add support for latency benchmarking.
2017-08-17 16:27:20 +01:00
Wilco Dijkstra beb52f502f Improve math benchmark infrastructure
Improve support for math function benchmarking.  This patch adds
a feature that allows accurate benchmarking of traces extracted
from real workloads.  This is done by iterating over all samples
rather than repeating each sample many times (which completely
ignores branch prediction and cache effects).  A trace can be
added to existing math function inputs via
"## name: workload-<name>", followed by the trace.

        * benchtests/README: Describe workload feature.
        * benchtests/bench-skeleton.c (main): Add support for
        benchmarking traces from workloads.
2017-06-20 16:26:26 +01:00
Joseph Myers bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
Joseph Myers f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
Wilco Dijkstra cb2f668d46 Add a new benchmark for isinf/isnan/isnormal/isfinite/fpclassify
The test uses 2 arrays with 1024 doubles: one with 99% finite FP numbers
(10% zeroes, 10% negative) and 1% inf/NaN, the other with 50% inf and
50% NaN.
ChangeLog:
2015-09-18  Wilco Dijkstra  <wdijkstr@arm.com>

	* benchtests/Makefile: Add bench-math-inlines, link with libm.
	* benchtests/bench-math-inlines.c: New benchmark.
	* benchtests/bench-util.h: New file.
	* benchtests/bench-util.c: New file.
	* benchtests/bench-skeleton.c: Add include of bench-util.c/h.
2015-09-18 16:02:38 +01:00
Joseph Myers b168057aaa Update copyright dates with scripts/update-copyrights. 2015-01-02 16:29:47 +00:00
Siddhesh Poyarekar 15eaf6ffe3 benchtests: Add new directive for benchmark initialization hook
Add a new 'init' directive that specifies the name of the function to
call to do function-specific initialization.  This is useful for
benchmarks that need to do a one-time initialization before the
functions are executed.
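
As a sketch, such a directive might appear in an inputs file like this
(the "## init:" spelling follows the existing directive convention; the
function name is a made-up example):

  ## init: bench_acos_setup
  0.7853981
  0.5235987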
2014-05-26 12:37:29 +05:30
Will Newton 970c602aa6 benchtests: Improve readability of JSON output
Add a small library to print JSON values and use it to improve the
readability of the benchmark output and the readability of the
benchmark code.
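
A minimal sketch of how such a JSON helper library can be used, assuming
an API along the lines of the json-lib.h additions (the exact signatures
and the output values are assumptions):

  #include <stdio.h>
  #include "json-lib.h"

  int
  main (void)
  {
    json_ctx_t json_ctx;

    json_init (&json_ctx, 0, stdout);
    json_document_begin (&json_ctx);
    json_attr_string (&json_ctx, "timing_type", "hp_timing");
    json_attr_object_begin (&json_ctx, "functions");
    json_attr_object_begin (&json_ctx, "acos");
    json_attr_double (&json_ctx, "mean", 42.5);  /* Illustrative value.  */
    json_attr_object_end (&json_ctx);
    json_attr_object_end (&json_ctx);
    json_document_end (&json_ctx);
    return 0;
  }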

ChangeLog:

2014-04-11  Will Newton  <will.newton@linaro.org>

	* benchtests/Makefile (extra-objs): Add json-lib.o.
	(bench-func): Tidy up JSON output.
	* benchtests/bench-skeleton.c: Include json-lib.h.
	(main): Use JSON library functions to do output of
	benchmark results.
	* benchtests/bench-timing-type.c (main): Output the
	timing type simply, leaving formatting to the user.
	* benchtests/json-lib.c: New file.
	* benchtests/json-lib.h: Likewise.
2014-04-11 16:05:03 +01:00
Siddhesh Poyarekar 5673750800 Detailed benchmark outputs for functions
This patch adds an option to get detailed benchmark output for
functions.  Invoking the benchmark with 'make DETAILED=1 bench' causes
each benchmark program to store a mean execution time for each input
it works on.  This is useful to give a more comprehensive picture of
performance of functions compared to just the single mean figure.
2014-03-29 09:40:19 +05:30
Siddhesh Poyarekar cb5e4aada7 Make bench.out in json format
This patch changes the output format of the main benchmark output file
(bench.out) to an extensible format.  I chose JSON over XML because in
addition to being extensible, it is also not too verbose.
Additionally, it has good support in Python.

The significant change I have made in terms of functionality is to put
timing information as an attribute in JSON instead of a string; to do
that, there is a separate program that prints out a JSON snippet
mentioning the type of timing (hp_timing or clock_gettime).  The mean
timing has now changed from iterations per unit time to actual timing
per iteration.
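
A hedged sketch of the resulting bench.out shape (the attribute names and
values here are illustrative, not a verbatim dump):

  {
   "timing_type": "hp_timing",
   "functions": {
    "acos": {
     "duration": 9.98e+09,
     "iterations": 2.1e+08,
     "mean": 47.5
    }
   }
  }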
2014-03-29 09:37:44 +05:30
Allan McRae d4697bc93d Update copyright notices with scripts/update-copyrights 2014-01-01 22:00:23 +10:00
Will Newton b987c77672 benchtests: Rename argument to TIMING_INIT macro.
The TIMING_INIT macro currently sets the number of loop iterations
to 1000, which limits its usefulness. Make the argument a clock
resolution value and multiply by 1000 in bench-skeleton.c instead
to allow easier reuse.
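
A minimal sketch of the new calling convention in the skeleton (the
types and variable names are illustrative):

  unsigned long iters, res;

  /* RES now receives the clock resolution instead of a hardcoded
     iteration count; the caller scales it by 1000 itself.  */
  TIMING_INIT (res);
  iters = 1000 * res;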

ChangeLog:

2013-09-11  Will Newton  <will.newton@linaro.org>

	* benchtests/bench-timing.h (TIMING_INIT): Rename ITERS
	parameter to RES. Remove hardcoded 1000 value.
	* benchtests/bench-skeleton.c (main): Pass RES parameter
	to TIMING_INIT and multiply result by 1000.
2013-09-11 15:18:20 +01:00
Siddhesh Poyarekar 43fe811b73 Use HP_TIMING for benchmarks if available
HP_TIMING uses native timestamping instructions if available, thus
greatly reducing the overhead of recording start and end times for
function calls.  For architectures that don't have HP_TIMING
available, we fall back to the clock_gettime bits.  One may also
override this by invoking the benchmark as follows:

  make USE_CLOCK_GETTIME=1 bench

and get the benchmark results using clock_gettime.  One has to do
`make bench-clean` to ensure that the benchmark programs are rebuilt.
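
A hedged sketch of the selection this implies in bench-timing.h (the
fallback macro body below is illustrative, not the actual definition):

  #include <stdint.h>
  #include <time.h>

  #if defined HP_TIMING_AVAIL && !defined USE_CLOCK_GETTIME
  /* Native timestamping instructions: very low overhead.  */
  # define TIMING_NOW(var) HP_TIMING_NOW (var)
  #else
  /* Portable fallback via clock_gettime.  */
  # define TIMING_NOW(var)                                          \
    ({ struct timespec _ts;                                         \
       clock_gettime (CLOCK_PROCESS_CPUTIME_ID, &_ts);              \
       (var) = (uint64_t) _ts.tv_sec * 1000000000ULL + _ts.tv_nsec; })
  #endif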
2013-05-13 13:44:32 +05:30
Siddhesh Poyarekar 5c637fe5ee Fix coding style 2013-05-10 17:44:27 +05:30
Ondrej Bilka bb7cf681e9 Preheat CPU in benchtests.
A benchmark could be skewed by the CPU initially running at minimal
frequency and speeding up later.  We first run the code in a loop
to partially fix this issue.
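
A minimal sketch of the idea (the one-second warm-up interval is an
assumption, not the actual value used):

  #include <time.h>

  /* Spin on the clock so the CPU can leave its low-frequency state
     before any measurements are taken.  */
  static void
  preheat_cpu (void)
  {
    struct timespec start, now;

    clock_gettime (CLOCK_MONOTONIC, &start);
    do
      clock_gettime (CLOCK_MONOTONIC, &now);
    while (now.tv_sec - start.tv_sec < 1);
  }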
2013-05-08 08:25:08 +02:00
Siddhesh Poyarekar f0ee064b7d Allow multiple input domains to be run in the same benchmark program
Some math functions have distinct performance characteristics in
specific domains of inputs, where some inputs return via a fast path
while other inputs require multiple precision calculations at
different precision levels.  The way to implement different domains
was to have a separate source file and benchmark definition, resulting
in separate programs.

This clutters up the benchmark, so this change allows these domains to
be consolidated into the same input file.  To do this, the input file
format is now enhanced to allow comments with a preceding # and
directives with two # at the beginning of a line.  A directive that
looks like:

##name: foo

tells the benchmark generation script that what follows is a different
domain of inputs.  The value of the 'name' directive (in this case,
foo) is used in the output.  The two input domains are then executed
sequentially and their results collated separately.  With the above
directive, there would be two lines in the result that look like:

func(): ....
func(foo): ...
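
Putting it together, a hedged sketch of an inputs file with two domains
(the values are made up):

  # Inputs that take the fast path.
  0.5
  0.9
  ##name: foo
  # Inputs that need multiple precision calculations.
  0.9999999999999999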
2013-04-30 14:17:57 +05:30
Siddhesh Poyarekar d569c6eeb4 Maintain runtime of each benchmark at ~10 seconds
The idea of running benchmarks for a constant number of iterations is
problematic.  While the benchmarks may run for 10 seconds on x86_64,
they could run for about 30 seconds on powerpc and worse, over 3
minutes on arm.  Besides that, adding a new benchmark is cumbersome
since one needs to find out the number of iterations needed for a
sufficient runtime.

A better idea would be to run each benchmark for a specific amount of
time.  This patch does just that.  The run time defaults to 10 seconds
and is configurable on the command line:

  make BENCH_DURATION=5 bench
2013-04-30 14:10:20 +05:30
Siddhesh Poyarekar 8cfdb7e056 Framework for performance benchmarking of functions
See benchtests/Makefile to know how to use it.
2013-03-15 12:30:03 +05:30