The Go frontend passes closures through to functions using the
functions __go_set_closure and __go_get_closure. The
expectation is that there are no function calls between
set_closure and get_closure. However, it turns out that there
can be function calls if some of the function arguments
require type conversion to an interface type. Converting to
an interface type can allocate memory, and that can in turn
trigger a garbage collection, and that can in turn call pool
cleanup functions that may call __go_set_closure. So the
called function can see the wrong closure value, which is bad.
This patch fixes the problem in two different ways. First, we
move all type conversions in function arguments into temporary
variables so that they cannot appear before the call to
__go_set_closure. (This required shifting the flatten phase
after the simplify_thunk phase, since the latter expects to
work with unconverted argument types.) Second, we fix the
memory allocation function to preserve the closure value
across any possible garbage collection.
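To illustrate the first part of the fix, here is a sketch in Go of
the ordering change; every name below is a stand-in for a
compiler-generated call, not a real API.

    package closuredemo

    import "unsafe"

    // Stand-ins for the runtime helpers; the real ones are
    // __go_set_closure and __go_get_closure.
    var savedClosure unsafe.Pointer

    func setClosure(c unsafe.Pointer) { savedClosure = c }

    // Converting to an interface type may allocate, and so may
    // trigger a garbage collection.
    func convertToInterface(x int) interface{} { return x }

    // Before the fix: the conversion runs between set_closure and
    // the call, so a GC-triggered call to set_closure can clobber
    // the saved value.
    func callBefore(c unsafe.Pointer, f func(interface{}), x int) {
        setClosure(c)
        f(convertToInterface(x))
    }

    // After the fix: the conversion is hoisted into a temporary, so
    // no call can intervene between set_closure and the call itself.
    func callAfter(c unsafe.Pointer, f func(interface{}), x int) {
        tmp := convertToInterface(x)
        setClosure(c)
        f(tmp)
    }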
A test case is the libgo database/sql check run with the
environment variable GOGC set to 1.
From-SVN: r213932
PR other/61895
runtime: Ignore small argv[0] file for backtrace.
Reportedly in some cases Docker starts processes with argv[0]
pointing to an empty file. That would cause libgo to pass
that empty file to libbacktrace, which would then fail to do
any backtraces. Everything should work fine if libbacktrace
falls back to /proc/self/exe.
This patch to libgo works around the problem by ignoring
argv[0] if it is a small file, or if stat fails. This is not
a perfect fix but it's an unusual problem.
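A minimal sketch of the check, with a hypothetical helper and an
illustrative size threshold; the real logic lives in libgo's C
startup code.

    package argvcheck

    import "os"

    // backtraceFilename returns the path to hand to libbacktrace, or
    // "" to make libbacktrace fall back to /proc/self/exe. The
    // 4096-byte threshold is an assumption for illustration only.
    func backtraceFilename(argv0 string) string {
        st, err := os.Stat(argv0)
        if err != nil || st.Size() < 4096 {
            return ""
        }
        return argv0
    }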
From-SVN: r213513
This variable is apparently unused as a result of local changes.
gccgo accepts this variable declaration, but other frontends may not.
From-SVN: r212873
PR go/61620
runtime: Don't free tiny blocks in map deletion.
The memory allocator now has a special case for tiny blocks
(smaller than 16 bytes), and they cannot be explicitly freed.
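A hedged sketch of the resulting guard; the names are illustrative,
and the real change is in the runtime's map code, which is C.

    package tinyfree

    import "unsafe"

    const tinySize = 16 // blocks below this size come from the tiny allocator

    // maybeFree frees a block only if it has its own allocation; tiny
    // blocks may share storage with other objects and must not be freed.
    func maybeFree(p unsafe.Pointer, size uintptr) {
        if size < tinySize {
            return
        }
        runtimeFree(p) // stand-in for the runtime's free routine
    }

    func runtimeFree(p unsafe.Pointer) {}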
From-SVN: r212233
This introduces the "bench" build target, which can be used to run
all benchmarks.
It is also possible to run subsets of benchmarks with the
"package/check" build targets by setting GOBENCH to a matching regex.
From-SVN: r212212
PR go/52583
runtime: Stop backtrace at a few recognized functions.
On x86_64 Solaris the makecontext function does not properly
indicate that it is at the top of the stack. Attempting to
unwind the stack past a call to makecontext tends to crash.
This patch changes libgo to look for certain functions that
are always found at the top of the stack, and to stop
unwinding when it reaches one of those functions. There is
never anything interesting past these functions--that is,
there is never any code written by the user.
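The idea in sketch form, with illustrative names; the exact list of
recognized functions lives in libgo's backtrace callback and may
differ.

    package unwindstop

    // stopUnwind reports whether the named frame is one of the
    // functions that only ever appear at the top of a stack, past
    // which there is never user code. The names listed here are
    // examples, not the exact set used by libgo.
    func stopUnwind(name string) bool {
        switch name {
        case "makecontext", "kickoff", "main":
            return true
        }
        return false
    }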
From-SVN: r211640
PR go/61498
runtime: Always set gcnext_sp to pointer-aligned address.
The gcnext_sp field is only used on systems that do not use
split stacks. It marks the bottom of the stack for the
garbage collector. This change makes sure that the stack
bottom is always aligned to a pointer value.
Previously the garbage collector would align all the addresses
that it scanned, but it now expects them to be aligned before
scanning.
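The alignment itself is simple arithmetic, sketched here assuming
rounding down; the real code is C in the runtime and may choose the
rounding direction differently.

    package alignsp

    import "unsafe"

    // alignDown rounds addr down to a multiple of the pointer size so
    // the garbage collector can scan from it without further
    // adjustment.
    func alignDown(addr uintptr) uintptr {
        const ptrSize = unsafe.Sizeof(uintptr(0))
        return addr &^ (ptrSize - 1)
    }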
From-SVN: r211639
This revision was committed January 7, 2014. The next
revision deleted runtime/mfinal.c. That will be done in a
subsequent merge.
This merge changes type descriptors to add a zero field,
pointing to a zero value for that type. This is implemented
as a common variable.
* go-gcc.cc (Gcc_backend::implicit_variable): Add is_common and
alignment parameters. Permit init parameter to be NULL.
From-SVN: r211249
This adds the --without-libatomic configure option, which disables
libgo's dependency on libatomic and is useful when building libgo
with a non-GCC compiler. It is also useful on platforms where the
libatomic runtime functions are known not to be required, or where
the compiler automatically provides an implementation of them.
From-SVN: r211065
LLVM's code generator does not currently support split stacks for vararg
functions, so we disable split stacks for the only function that uses this
feature under Clang. This appears to be OK as long as:
- this function only calls non-inlined, internal-linkage (hence no dynamic
loader) functions compiled with split stacks (i.e. go_vprintf), which can
allocate more stack space as required;
- this function itself does not occupy more than BACKOFF bytes of stack space
(see libgcc/config/i386/morestack.S).
These conditions are currently known to be satisfied by Clang on x86-32 and
x86-64. Note that signal handlers receive slightly less stack space than they
normally would if they happen to be called while this function is running.
If this turns out to be a problem we could consider increasing BACKOFF.
From-SVN: r211037
This includes the use of __complex and __builtin_ functions where
unprefixed entities would suffice, and the use of a union for
bit-casting between types.
From-SVN: r211036
PR go/60931
runtime: Fix garbage collector issue with non-4kB system page size
The Go garbage collector tracks memory in terms of 4kB pages.
Most of the code checks getpagesize() at runtime and does the
right thing.
On a 64kB ppc64 box I see SEGVs in long-running processes,
which have been diagnosed as a bug in scavengelist.
scavengelist does a madvise(MADV_DONTNEED) without rounding
the arguments to the system page size. A strace of one of the
failures shows the problem:
madvise(0xc211030000, 4096, MADV_DONTNEED) = 0
The kernel rounds the length up to 64kB and we mark 60kB of
valid data as no longer needed.
Round start up to a system page boundary and round end down
before calling madvise.
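The rounding is simple arithmetic, sketched here with illustrative
names; the actual change is in the C runtime's scavengelist.

    package scavenge

    // roundRange rounds start up and end down to the system page size
    // so that madvise(MADV_DONTNEED) never covers bytes outside the
    // range. If the rounded range is empty, madvise should not be
    // called at all.
    func roundRange(start, end, pageSize uintptr) (uintptr, uintptr) {
        start = (start + pageSize - 1) &^ (pageSize - 1)
        end = end &^ (pageSize - 1)
        return start, end
    }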
From-SVN: r209777
A gccgo language extension allows a function to be declared multiple
times. Avoid the use of this extension by deduplicating declarations
in mksyscall.awk.
From-SVN: r209508
Avoid the use of a gccgo language extension that allows unsafe.Sizeof
to accept a type, by instead passing an expression of the relevant type.
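For example, the portable spelling passes a value of the type rather
than the type itself.

    package sizeofdemo

    import "unsafe"

    // Portable: pass an expression of the relevant type.
    const int32Size = unsafe.Sizeof(int32(0))

    // The gccgo extension would have allowed unsafe.Sizeof(int32),
    // which other compilers reject.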
From-SVN: r209503
This patch fixes a rare but serious bug. The Go garbage
collector only examines Go stacks. When Go code calls a
function that is not written in Go, it first calls
syscall.Entersyscall. Entersyscall records the position of
the Go stack pointer and saves a copy of all the registers.
If the garbage collector runs while the thread is executing
the non-Go code, the garbage collector fetches the stack
pointer and registers from the saved location.
Entersyscall saves the registers using the getcontext
function. Unfortunately I didn't consider the possibility
that Entersyscall might itself change a register before
calling getcontext. This only matters for callee-saved
registers, as caller-saved registers would be visible on the
saved stack. And it only matters if Entersyscall is compiled
to save and modify a callee-saved register before it calls
getcontext. And it only matters if a garbage collection
occurs while the non-Go code is executing. And it only
matters if the only copy of a valid Go pointer happens to be
in the callee-saved register when Entersyscall is called.
When all those conditions are true, the Go pointer might get
collected incorrectly, leading to memory corruption.
This patch tries to avoid the problem by splitting
Entersyscall into two functions. The first is a simple
function that just calls getcontext and then calls the rest of
Entersyscall. This should fix the problem, provided the
simple Entersyscall function does not itself modify any
callee-saved registers before calling getcontext. That seems
to be true on the systems I checked. But since the argument
to getcontext is an offset from a TLS variable, it won't be
true on a system which needs to save callee-saved registers in
order to get the address of a TLS variable. I don't know why
any system would work that way, but I don't know how to rule
it out. I think that on any such system this will have to be
implemented in assembler. I can't put the ucontext_t
structure on the stack, because this function cannot split
stacks, and the ucontext_t structure is large enough that it
could cause a stack overflow.
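A conceptual sketch of the split; the real code is C in libgo's
runtime, and every name below is a stand-in.

    package entersyscall

    // ucontext stands in for the C ucontext_t register-save area kept
    // in the goroutine structure.
    type ucontext struct{}

    var gcregs ucontext

    // getcontext stands in for the libc getcontext call that records
    // the registers for the garbage collector.
    func getcontext(uc *ucontext) {}

    // Entersyscall does nothing before saving the registers, so no
    // callee-saved register can be modified first.
    func Entersyscall() {
        getcontext(&gcregs)
        doentersyscall()
    }

    // doentersyscall carries the rest of the original Entersyscall:
    // recording the stack pointer, changing the goroutine status, and
    // so on.
    func doentersyscall() {}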
From-SVN: r208390
Before this, the heap location used on a 64-bit system was not
available to user space on arm64, so the "32-bit" strategy ended up
being used. Instead, use an address that is available, and that for
bonus points is far away from where the kernel allocates address
space by default.
From-SVN: r207977
The spans array is allocated in runtime_mallocinit. On a
32-bit system the number of entries in the spans array is
MaxArena32 / PageSize, which is (2U << 30) / (1 << 12) == (1 << 19).
So we are allocating an array that can hold 19 bits for an
index that can hold 20 bits. According to the comment in the
function, this is intentional: we only allocate enough spans
(and bitmaps) for a 2G arena, because allocating more would
probably be wasteful.
But since the span index is simply the upper 20 bits of the
memory address, this scheme only works if memory addresses are
limited to the low 2G of memory. That would be OK if we were
careful to enforce it, but we're not. What we are careful to
enforce, in functions like runtime_MHeap_SysAlloc, is that we
always return addresses between the heap's arena_start and
arena_start + MaxArena32.
We generally get away with it because we start allocating just
after the program end, so we only run into trouble with
programs that allocate a lot of memory, enough to get past
address 0x80000000.
This changes the code that computes a span index to subtract
arena_start on 32-bit systems just as we currently do on
64-bit systems.
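In sketch form, with illustrative names; the real computation is in
the C runtime's mheap code.

    package spanindex

    const pageShift = 12 // 4kB pages

    // spanIndex maps an address inside the arena to its entry in the
    // spans array. Subtracting arena_start keeps the index within the
    // number of entries actually allocated, on 32-bit systems as well
    // as 64-bit ones.
    func spanIndex(p, arenaStart uintptr) uintptr {
        return (p - arenaStart) >> pageShift
    }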
From-SVN: r206501
I am reliably informed that the architecture name and letter for the
plan9/inferno compilers for 64-bit ARM systems will be "arm64" and "7"
respectively, so let's get that bit in nice and early.
From Michael Hudson-Doyle.
https://codereview.appspot.com/34830045/
From-SVN: r206374
On Solaris, if you do an in-progress connect, and then the
server accepts and closes the socket, the client's later
attempt to complete the connect will fail with EINVAL. Handle
this case by assuming that the connect succeeded. This code
is weird enough that it is implemented as Solaris-only so that
it doesn't hide a real error on a different OS.
See http://golang.org/issue/6828.
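The shape of the workaround, sketched with a hypothetical helper; the
real check is in libgo's net package and is built for Solaris only.

    package connectfix

    import (
        "runtime"
        "syscall"
    )

    // fixConnectError treats EINVAL from completing an in-progress
    // connect on Solaris as success, assuming the server accepted and
    // then closed the socket before the connect could be completed.
    func fixConnectError(err error) error {
        if runtime.GOOS == "solaris" && err == syscall.EINVAL {
            return nil
        }
        return err
    }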
From-SVN: r206232
PR go/59506
net: use DialTimeout in TestSelfConnect
Backported from master repository.
This avoids problems with systems that take a long time to
find out nothing is listening, while still testing for the
self-connect misfeature since a self-connect should be fast.
With this we may be able to remove the test for non-Linux
systems.
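For reference, DialTimeout just bounds how long the attempt may take;
something like the following, where the timeout value is illustrative
rather than what the test uses.

    package dialdemo

    import (
        "net"
        "time"
    )

    // tryDial connects with a deadline, so a host that is slow to
    // report that nothing is listening cannot hang the test.
    func tryDial(addr string) error {
        c, err := net.DialTimeout("tcp", addr, 500*time.Millisecond)
        if err != nil {
            return err
        }
        return c.Close()
    }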
Tested (on GNU/Linux) by editing selfConnect in
tcpsock_posix.go to always return false and verifying that
TestSelfConnect then fails with and without this change.
Idea from Uros Bizjak.
From-SVN: r206224