This patch increases timeouts on some tests I've observed timing out.
elf/tst-tls13 and iconvdata/tst-loading both dynamically load many
objects and so are slow when testing over NFS. Their timeouts were
set before the default changed from 2 to 20 seconds; this patch
removes those old settings, effectively increasing the timeouts to
20 seconds (from 3 and 10 seconds respectively).
malloc/tst-malloc-thread-fail.c and malloc/tst-mallocfork2.c are slow
on slow systems, so I set a fairly arbitrary 100-second timeout,
which seems to suffice on the system where I saw them timing out.
nss/tst-cancel-getpwuid_r.c and nss/tst-nss-getpwent.c are slow on
systems with a large passwd file; I set timeouts that empirically
worked for me. (It seems tst-cancel-getpwuid_r.c is hitting the
100000 getpwuid_r call limit in my testing, with each call taking a
bit over 0.007 seconds, so 700 seconds for the test.)
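For reference, tests override the default by defining TIMEOUT before
including the skeleton; a minimal sketch of the old-style convention
(test body elided):

  #define TIMEOUT 100

  static int
  do_test (void)
  {
    return 0;
  }

  #define TEST_FUNCTION do_test ()
  #include "test-skeleton.c"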
* elf/tst-tls13.c (TIMEOUT): Remove.
* iconvdata/tst-loading.c (TIMEOUT): Likewise.
* malloc/tst-malloc-thread-fail.c (TIMEOUT): Increase to 100.
* malloc/tst-mallocfork2.c (TIMEOUT): Define to 100.
* nss/tst-cancel-getpwuid_r.c (TIMEOUT): Define to 900.
* nss/tst-nss-getpwent.c (TIMEOUT): Define to 300.
GCC 7 has a -Walloc-size-larger-than= warning for allocations of half
the address space or more. This causes errors building glibc tests
that deliberately test failure of very large allocations. This patch
arranges for this warning to be ignored around the problematic
function calls.
Tested compilation for aarch64 (GCC mainline) with
build-many-glibcs.py; did execution testing for x86_64 (GCC 5).
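The pattern used around each problematic call looks like this (sketch
using the DIAG_* macros from <libc-internal.h>; the version check and
option string match what the warning requires):

  DIAG_PUSH_NEEDS_COMMENT;
  #if __GNUC_PREREQ (7, 0)
  /* GCC 7 warns about allocations of half the address space or more;
     this test deliberately checks that such an allocation fails.  */
  DIAG_IGNORE_NEEDS_COMMENT (7, "-Walloc-size-larger-than=");
  #endif
  p = malloc (-1);
  DIAG_POP_NEEDS_COMMENT;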
* malloc/tst-malloc.c: Include <libc-internal.h>.
(do_test): Disable -Walloc-size-larger-than= around tests of
malloc with negative sizes.
* malloc/tst-mcheck.c: Include <libc-internal.h>.
(do_test): Disable -Walloc-size-larger-than= around tests of
malloc and realloc with negative sizes.
* malloc/tst-realloc.c: Include <libc-internal.h>.
(do_test): Disable -Walloc-size-larger-than= around tests of
realloc with negative sizes.
Read tunable values from the user via the GLIBC_TUNABLES
environment variable. The value of this variable is a colon-separated
list of name=value pairs. So a typical string would look like this:
GLIBC_TUNABLES=glibc.malloc.mmap_threshold=2048:glibc.malloc.trim_threshold=1024
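A minimal, self-contained sketch of parsing such a string (purely
illustrative; the real parse_tunables in elf/dl-tunables.c parses in
place and avoids allocation):

  #include <stdio.h>
  #include <string.h>

  static void
  parse_tunables_sketch (char *valstring)
  {
    char *sp = NULL;
    for (char *pair = strtok_r (valstring, ":", &sp); pair != NULL;
         pair = strtok_r (NULL, ":", &sp))
      {
        char *eq = strchr (pair, '=');
        if (eq == NULL)
          continue;               /* Ignore malformed entries.  */
        *eq = '\0';
        printf ("tunable %s = %s\n", pair, eq + 1);
      }
  }

  int
  main (void)
  {
    char s[] = "glibc.malloc.mmap_threshold=2048"
               ":glibc.malloc.trim_threshold=1024";
    parse_tunables_sketch (s);
    return 0;
  }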
* config.make.in (have-loop-to-function): Define.
* elf/Makefile (CFLAGS-dl-tunables.c): Add
-fno-tree-loop-distribute-patterns.
* elf/dl-tunables.c: Include libc-internal.h.
(GLIBC_TUNABLES): New macro.
(tunables_strdup): New function.
(parse_tunables): New function.
(min_strlen): New function.
(__tunables_init): Use the new functions and macro.
(disable_tunable): Disable tunable from GLIBC_TUNABLES.
* malloc/tst-malloc-usable-tunables.c: New test case.
* malloc/tst-malloc-usable-static-tunables.c: New test case.
* malloc/Makefile (tests, tests-static): Add tests.
The tunables framework allows us to uniformly manage and expose global
variables inside glibc as switches to users. tunables/README has
instructions for glibc developers to add new tunables.
Tunables support can be enabled by passing the --enable-tunables flag
to the configure script. This patch only adds a framework and does
not impose any limitations on how tunable values are
read from the user. It also adds environment variables used in malloc
behaviour tweaking to the tunables framework as a PoC of the
compatibility interface.
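As a rough sketch of the compatibility-interface PoC (names taken
from the ChangeLog below; the exact macro arguments in the tree may
differ, and glibc-internal context is assumed):

  #if HAVE_TUNABLES
  # define TUNABLE_NAMESPACE malloc
  # include <dl-tunables.h>

  /* Called by the tunables framework when glibc.malloc.check is set,
     mirroring the old MALLOC_CHECK_ environment variable.  */
  void
  DL_TUNABLE_CALLBACK (set_mallopt_check) (void *valp)
  {
    int32_t value = *(int32_t *) valp;
    if (value != 0)
      __malloc_check_init ();
  }
  #endif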
* manual/install.texi: Add --enable-tunables option.
* INSTALL: Regenerate.
* README.tunables: New file.
* Makeconfig (CPPFLAGS): Define TOP_NAMESPACE.
(before-compile): Generate dl-tunable-list.h early.
* config.h.in: Add HAVE_TUNABLES.
* config.make.in: Add have-tunables.
* configure.ac: Add --enable-tunables option.
* configure: Regenerate.
* csu/init-first.c (__libc_init_first): Move
__libc_init_secure earlier...
* csu/libc-start.c (LIBC_START_MAIN): ... to here.
Include dl-tunables.h, libc-internal.h.
(LIBC_START_MAIN) [!SHARED]: Initialize tunables for static
binaries.
* elf/Makefile (dl-routines): Add dl-tunables.
* elf/Versions (ld): Add __tunable_set_val to GLIBC_PRIVATE
namespace.
* elf/dl-support.c (_dl_nondynamic_init): Unset MALLOC_CHECK_
only when !HAVE_TUNABLES.
* elf/rtld.c (process_envvars): Likewise.
* elf/dl-sysdep.c [HAVE_TUNABLES]: Include dl-tunables.h.
(_dl_sysdep_start): Call __tunables_init.
* elf/dl-tunable-types.h: New file.
* elf/dl-tunables.c: New file.
* elf/dl-tunables.h: New file.
* elf/dl-tunables.list: New file.
* malloc/tst-malloc-usable-static.c: New test case.
* malloc/Makefile (tests-static): Add it.
* malloc/arena.c [HAVE_TUNABLES]: Include dl-tunables.h.
Define TUNABLE_NAMESPACE.
(DL_TUNABLE_CALLBACK (set_mallopt_check)): New function.
(DL_TUNABLE_CALLBACK_FNDECL): New macro. Use it to define
callback functions.
(ptmalloc_init): Set tunable values.
* scripts/gen-tunables.awk: New file.
* sysdeps/mach/hurd/dl-sysdep.c: Include dl-tunables.h.
(_dl_sysdep_start): Call __tunables_init.
The new test driver in <support/test-driver.c> has feature parity with
the old one. The main difference is that its hooking mechanism is
based on functions and function pointers instead of macros. This
commit also implements a new environment variable, TEST_COREDUMPS,
which disables the code that suppresses core dumps (that is, core
dumps are enabled if the invocation environment has not disabled them).
<test-skeleton.c> defines wrapper functions so that it is possible to
use existing macros with the new-style hook functionality.
This commit changes only a few test cases to the new test driver, to
make sure that it works as expected.
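A converted test now looks roughly like this: define do_test and
include the driver at the end of the file instead of test-skeleton.c
(sketch):

  #include <stdio.h>

  static int
  do_test (void)
  {
    puts ("test body goes here");
    return 0;   /* 0 = pass, 1 = fail, 77 = unsupported.  */
  }

  #include <support/test-driver.c>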
Make mallopt helper functions for each mallopt parameter so that they
can be called consistently in other areas, like setting tunables.
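The helpers share a common shape; a simplified, self-contained sketch
(the real versions in malloc/malloc.c update the internal malloc_par
structure and fire systemtap probes):

  #include <stddef.h>

  /* Stand-in for the relevant malloc_par fields.  */
  static struct
  {
    size_t trim_threshold;
    int no_dyn_threshold;
  } mp_;

  static int
  do_set_trim_threshold (size_t value)
  {
    mp_.trim_threshold = value;
    mp_.no_dyn_threshold = 1;
    return 1;                   /* mallopt-style success.  */
  }

__libc_mallopt then reduces to a switch that dispatches to these
helpers, and tunables callbacks can call them directly.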
* malloc/malloc.c (do_set_mallopt_check): New function.
(do_set_mmap_threshold): Likewise.
(do_set_mmaps_max): Likewise.
(do_set_top_pad): Likewise.
(do_set_perturb_byte): Likewise.
(do_set_trim_threshold): Likewise.
(do_set_arena_max): Likewise.
(do_set_arena_test): Likewise.
(__libc_mallopt): Use them.
After the removal of __malloc_initialize_hook, newly compiled
Emacs binaries are no longer able to use the malloc_get_state
and malloc_set_state interfaces.
malloc_get_state is only used during the Emacs build process,
so we provide a stub implementation only. Existing Emacs binaries
will not call this stub function, but still reference the symbol.
The rewritten tst-mallocstate test constructs a dumped heap
which should approximate what existing Emacs binaries pass
to glibc malloc.
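The stub can be as simple as the following sketch (the real
definition also carries compat symbol annotations; whether errno is
set is a detail of the actual implementation):

  void *
  __malloc_get_state (void)
  {
    /* Failing with ENOSYS is sufficient; existing binaries never
       call this entry point.  */
    __set_errno (ENOSYS);
    return NULL;
  }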
The M_ARENA_MAX and M_ARENA_TEST macros are defined in malloc.c as
well as malloc.h, and the former is unnecessary. This patch removes
the duplicate. Tested on x86_64 to verify that the generated code
remains unchanged, barring the changed line numbers passed to
__malloc_assert.
* malloc/malloc.c (M_ARENA_TEST, M_ARENA_MAX): Remove.
The M_ARENA_* mallopt parameters are in wide use in production to
control the number of arenas that a long-lived process creates and
hence there is no point in stating that this interface is non-public.
Document this interface and remove the obsolete comment.
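Example use of the now-documented interface:

  #include <malloc.h>

  int
  main (void)
  {
    /* Cap the number of arenas this process will ever create.  */
    mallopt (M_ARENA_MAX, 2);
    return 0;
  }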
* manual/memory.texi (M_ARENA_TEST): Add documentation.
(M_ARENA_MAX): Likewise.
* malloc/malloc.c: Remove obsolete comment.
This is a trivial change to add the static tests only to tests-static
and then adding all of tests-static to the tests target to make it
look consistent with some other Makefiles. This avoids having to
duplicate the test names across the two make targets.
* malloc/Makefile (tests): Remove individual static test names
and just add all of tests-static.
Existing interposed mallocs do not define the glibc-internal
fork callbacks (and they should not), so statically interposed
mallocs lead to link failures because the strong reference from
fork pulls in glibc's malloc, resulting in multiple definitions
of malloc-related symbols.
The dynamic linker currently uses __libc_memalign for TLS-related
allocations. The goal is to switch to malloc instead. If the minimal
malloc follows the ABI fundamental alignment, we can assume that malloc
provides this alignment, and thus skip explicit alignment in a few
cases as an optimization.
It was requested on libc-alpha that MALLOC_ALIGNMENT should be used,
although this results in wasted space if MALLOC_ALIGNMENT is larger
than the fundamental alignment. (The dynamic linker cannot assume
that the non-minimal malloc will provide an alignment of
MALLOC_ALIGNMENT; the ABI provides _Alignof (max_align_t) only.)
It is necessary to preserve the invariant that if an arena is
on the free list, it has thread attach count zero. Otherwise,
when arena_thread_freeres sees the zero attach count, it will
add the arena to the free list; without the invariant, an arena
could get pushed onto the list twice, resulting in a cycle.
One possible execution trace looks like this:
Thread 1 examines free list and observes it as empty.
Thread 2 exits and adds its arena to the free list,
with attached_threads == 0.
Thread 1 selects this arena in reused_arena (not from the free list).
Thread 1 increments attached_threads and attaches itself.
(The arena remains on the free list.)
Thread 1 exits, decrements attached_threads,
and adds the arena to the free list.
The final step creates a cycle in the usual way (by overwriting the
next_free member with the former list head, while there is another
list item pointing to the arena structure).
tst-malloc-thread-exit exhibits this issue, but it was only visible
with a debugger because the incorrect fix in bug 19243 removed
the assert from get_free_list.
Right now tilegx is on the verge of timing out when it runs,
so adding a bit of headroom seems like the right thing; we
see failures when running tests in parallel.
Before this change, the while loop in reused_arena which avoids
returning a corrupt arena would never execute its body if the selected
arena were not corrupt. As a result, result == begin after the loop,
and the function returns NULL, triggering fallback to mmap.
__malloc_initialize_hook is interposed by application code, so
the usual approach to define a compatibility symbol does not work.
This commit adds a new mechanism based on #pragma GCC poison in
<stdc-predef.h>.
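The mechanism boils down to poisoning the identifier once glibc's own
uses are out of the way, so that any reference in newly compiled code
becomes a hard error (sketch):

  #pragma GCC poison __malloc_initialize_hook

  /* From here on, this no longer compiles:
     void (*__malloc_initialize_hook) (void) = my_hook;  */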
For regular mmapped chunks there are two size fields (hence a reduction
by 2 * SIZE_SZ bytes), but for fake chunks, we only have one size field,
so we need to subtract SIZE_SZ bytes.
This was initially reported as Emacs bug 23726.
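In code form, the distinction is as follows (self-contained sketch;
in glibc SIZE_SZ is sizeof (INTERNAL_SIZE_T), and the helper name
here is hypothetical):

  #include <stddef.h>

  #define SIZE_SZ (sizeof (size_t))

  static size_t
  mmapped_usable_size (size_t chunk_size, int dumped_fake_chunk)
  {
    if (dumped_fake_chunk)
      return chunk_size - SIZE_SZ;      /* One size field.  */
    return chunk_size - 2 * SIZE_SZ;    /* prev_size and size fields.  */
  }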
After the heap rewriting added in commit
4cf6c72fd2 (malloc: Rewrite dumped heap
for compatibility in __malloc_set_state), we can change malloc alignment
for new allocations because the alignment of old allocations no longer
matters.
We need to increase the malloc state version number, so that binaries
containing dumped heaps of the new layout will not try to run on
previous versions of glibc, which would result in obscure crashes.
This commit addresses a failure of tst-malloc-thread-fail on the
affected architectures (32-bit ppc and mips) because the test checks
pointer alignment.
The first SIGUSR1 signal could arrive when sigusr1_sender_pid
was still 0. As a result, kill would send SIGSTOP to the
entire process group. This would cause the test to hang before
printing any output.
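The fix amounts to making the handler tolerate the not-yet-initialized
pid, along these lines (sketch; variable name from the test):

  /* kill (0, sig) would signal the entire process group, so do not
     call kill before sigusr1_sender_pid has been filled in.  */
  if (sigusr1_sender_pid > 0)
    kill (sigusr1_sender_pid, SIGSTOP);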
This commit also adds a sched_yield to the signal source, so that
it does not flood the parent process with signals it never has a
chance to handle.
Even with these changes, tst-mallocfork2 still fails reliably
after the fix in commit 56290d6e76
(Increase fork signal safety for single-threaded processes) is
backed out.
This will allow us to change many aspects of the malloc implementation
while preserving compatibility with existing Emacs binaries.
As a result, existing Emacs binaries will have a larger RSS, and Emacs
needs a few more milliseconds to start. This overhead is specific
to Emacs (and will go away once Emacs switches to its internal malloc).
The new checks to make free and realloc compatible with the dumped heap
are confined to the mmap paths, which are already quite slow due to the
munmap overhead.
This commit weakens some security checks, but only for heap pointers
in the dumped main arena. By default, this area is empty, so those
checks are as effective as before.
This provides a band-aid and addresses the scenario where fork is
called from a signal handler while the process is in the malloc
subsystem (or has acquired the libio list lock). It does not
address the general issue of async-signal-safety of fork;
multi-threaded processes are not covered, and some glibc
subsystems have fork handlers which are not async-signal-safe.
The fork handler now runs so late that there is no risk anymore that
other fork handlers in the same thread use malloc, so it is no
longer necessary to install malloc hooks which made a subset
of malloc functionality available to the thread that called fork.
Previously, a thread M invoking fork would acquire locks in this order:
(M1) malloc arena locks (in the registered fork handler)
(M2) libio list lock
A thread F invoking fflush (NULL) would acquire locks in this order:
(F1) libio list lock
(F2) individual _IO_FILE locks
A thread G running getdelim would use this order:
(G1) _IO_FILE lock
(G2) malloc arena lock
After executing (M1), (F1), (G1), none of the threads can make progress.
This commit changes the fork lock order to:
(M'1) libio list lock
(M'2) malloc arena locks
It explicitly encodes the lock order in the implementations of fork,
and does not rely on the registration order, thus avoiding the deadlock.
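Sketched in code, fork now takes the locks itself in the documented
order instead of relying on handler registration (glibc-internal
function names as used around this change; details vary by
configuration):

  /* (M'1) libio list lock first ...  */
  _IO_list_lock ();
  /* (M'2) ... then the malloc arena locks.  */
  __malloc_fork_lock_parent ();

  pid = ARCH_FORK ();

  if (pid == 0)
    {
      __malloc_fork_unlock_child ();
      _IO_list_resetlock ();
    }
  else
    {
      __malloc_fork_unlock_parent ();
      _IO_list_unlock ();
    }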
* malloc/Makefile ($(objpfx)tst-malloc-backtrace,
$(objpfx)tst-malloc-thread-exit, $(objpfx)tst-malloc-thread-fail): Use
$(shared-thread-library) instead of hardcoding the path to libpthread.
This test case exercises unusual code paths in allocation functions,
related to allocation failures. Specifically, the test can reveal
the following bugs:
(a) calloc returns non-zero memory on fallback to sysmalloc.
(b) calloc can self-deadlock because it fails to release
the arena lock on certain allocation failures.
(c) pvalloc can dereference a NULL arena pointer.
(a) and (b) appear specific to a faulty downstream backport.
(c) was fixed as part of commit 10ad46bc65.
The test for (a) was inspired by a reproducer supplied by Jeff Layton.
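By way of illustration, the simplest instance of such a failure-path
check looks like this (not the actual test, which drives many threads
and allocation sizes):

  #include <assert.h>
  #include <stdint.h>
  #include <stdlib.h>

  int
  main (void)
  {
    /* A request this large must fail cleanly rather than crash or
       deadlock; the fallback paths must still return zeroed memory
       for successful calloc calls.  */
    void *p = calloc (SIZE_MAX / 2, 2);
    assert (p == NULL);
    return 0;
  }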
* malloc/arena.c (list_lock): Document lock ordering requirements.
(free_list_lock): New lock.
(ptmalloc_lock_all): Comment on free_list_lock.
(ptmalloc_unlock_all2): Reinitialize free_list_lock.
(detach_arena): Update comment. free_list_lock is now needed.
(_int_new_arena): Use free_list_lock around detach_arena call.
Acquire arena lock after list_lock. Add comment, including FIXME
about incorrect synchronization.
(get_free_list): Switch to free_list_lock.
(reused_arena): Acquire free_list_lock around detach_arena call
and attached threads counter update. Add two FIXMEs about
incorrect synchronization.
(arena_thread_freeres): Switch to free_list_lock.
* malloc/malloc.c (struct malloc_state): Update comments to
mention free_list_lock.