Remove the old locking code written in C.
Add a shell script mkrsysinfo.sh to generate the runtime_sysinfo.go
file, so that we can get Go copies of the system time structures and
other types.
Tweak the compiler so that when compiling the runtime package the
address operator does not cause local variables to escape. When the gc
compiler compiles the runtime, an escaping local variable is treated as
an error. We should implement that, instead of this change, when escape
analysis is turned on.
Tweak the compiler so that the generated C header does not include names
that start with an underscore followed by a non-upper-case letter,
except for the special cases of _defer and _panic. Otherwise we
translate C types to Go in runtime_sysinfo.go and then generate those Go
types back as C types in runtime.inc, which is useless and painful for
the C code.
Change entersyscall and friends to take a dummy argument, as the gc
versions do, to simplify calls from the shared code.
Reviewed-on: https://go-review.googlesource.com/30079
From-SVN: r240657
Also copy over cputicks.go, env_posix.go, vdso_none.go, stubs2.go, and a
part of os_linux.go. Remove the corresponding functions from the C code
in libgo/go/runtime. Add some transitional support functions to
stubs.go. This converts several minor functions from C to Go.
Reviewed-on: https://go-review.googlesource.com/29962
From-SVN: r240609
This change removes the gccgo-specific hashmap code and replaces it with
the hashmap code from the Go 1.7 runtime. The Go 1.7 hashmap code is
more efficient, does a better job on details like when to update a key,
and provides some support against denial-of-service attacks.
The compiler is changed to call the new hashmap functions instead of the
old ones.
The compiler now tracks which types are reflexive and which require
updating when used as a map key, and records the information in map type
descriptors.
Map_index_expression is simplified. The special case for a map index on
the right hand side of a tuple expression has been unnecessary for some
time, and is removed. The support for specially marking a map index as
an lvalue is removed, in favor of lowering an assignment to a map index
into a function call. The long-obsolete support for a map index of a
pointer to a map is removed.
The __go_new_map_big function (known to the compiler as
Runtime::MAKEMAPBIG) is no longer needed, as the new runtime.makemap
function takes an int64 hint argument.
The old map descriptor type and supporting expression is removed.
The compiler was still supporting the long-obsolete syntax `m[k] = 0,
false` to delete a value from a map. That is now removed, requiring a
change to one of the gccgo-specific tests.
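For reference, the surviving way to write a map deletion:

    package main

    func main() {
        m := map[string]int{"k": 1}
        delete(m, "k") // replaces the removed form  m["k"] = 0, false
    }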
The builtin len function applied to a map or channel p is now compiled
as `p == nil ? 0 : *(*int)(p)`. The __go_chan_len function (known to
the compiler as Runtime::CHAN_LEN) is removed.
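As an illustration, a Go equivalent of that inline expansion (the helper
function here is only for exposition; the compiler emits this inline):

    package runtime

    import "unsafe"

    // chanOrMapLen mirrors `p == nil ? 0 : *(*int)(p)`: when p is non-nil
    // the element count is read from the first word of the map or channel
    // header.
    func chanOrMapLen(p unsafe.Pointer) int {
        if p == nil {
            return 0
        }
        return *(*int)(p)
    }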
Support for a shared zero value for maps to large value types is
introduced, along the lines of the gc compiler. The zero value is
handled as a common variable.
The hash function is changed to take a seed argument, changing the
runtime hash functions and the compiler-generated hash functions.
Unlike the gc compiler, both the hash and equal functions continue to
take the type length.
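For concreteness, the new signatures look roughly like this; the exact
parameter order shown is an assumption:

    package runtime

    import "unsafe"

    type (
        hashFn  func(p unsafe.Pointer, seed, size uintptr) uintptr // hash keeps the size argument
        equalFn func(p, q unsafe.Pointer, size uintptr) bool       // so does equal
    )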
Types that can not be compared now store nil for the hash and equal
functions, rather than pointing to functions that throw. Interface hash
and comparison functions now check explicitly for nil. This matches the
gc compiler and permits a simple implementation for ismapkey.
The compiler is changed to permit marking struct and array types as
incomparable, meaning that they have no hash or equal function. We use
this for thunk types, removing the existing special code to avoid
generating hash/equal functions for them.
The C runtime code adds memclr, memequal, and memmove functions.
The hashmap code uses go:linkname comments to make the functions
visible, as otherwise the compiler would discard them.
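A minimal sketch of the go:linkname pattern; hashDemo is a hypothetical
stand-in, not one of the real hashmap functions:

    package runtime

    import "unsafe"

    // Without the linkname comment the compiler would treat an otherwise
    // unreferenced function like this as dead and discard it; with it, the
    // symbol stays visible under the given name.
    //go:linkname hashDemo runtime.hashDemo
    func hashDemo(p unsafe.Pointer) uintptr { return uintptr(p) }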
The hashmap code comments out the unused reference to the address of the
first parameter in the race code, as otherwise the compiler thinks that
the parameter escapes and copies it onto the heap. This is probably not
needed when we enable escape analysis.
Several runtime map tests that were previously skipped for gccgo are now
run.
The Go runtime picks up type kind information and stubs. The type kind
information causes the generated runtime header file to define some
constants, including `empty`, and the C code is adjusted accordingly.
A Go-callable version of runtime.throw, that takes a Go string, is
added to be called from the hashmap code.
Reviewed-on: https://go-review.googlesource.com/29447
* go.go-torture/execute/map-1.go: Replace old map deletion syntax
with call to builtin delete function.
From-SVN: r240334
PR go/77642
runtime: pass correct type to __splitstack_find
The code was passing uintptr* to a function that expected size_t*.
Based on patch by Andreas Krebbel.
Fixes GCC PR 77642.
Reviewed-on: https://go-review.googlesource.com/29433
From-SVN: r240275
The definition and most uses of MAKECONTEXT_STACK_TOP were removed in
https://golang.org/cl/88660043, which removed support for Solaris 8/9.
One use of MAKECONTEXT_STACK_TOP was accidentally left in the source
code. Remove it now.
Reviewed-on: https://go-review.googlesource.com/28911
From-SVN: r240045
Some systems, such as ia64 and PPC, require that a ucontext_t pointer
passed to getcontext and friends be aligned to a 16-byte boundary.
Currently the ucontext_t fields in the g structure are defined in Go,
and Go has no way to ensure a 16-byte alignment for a struct field.
The fields are currently represented by an array of unsafe.Pointer.
Enforce the alignment by making the array larger, and picking an offset
into the array that is 16-byte aligned.
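Sketched in Go, with the helper name invented for illustration:

    package runtime

    import "unsafe"

    // The backing array is over-sized by 16 bytes, so a 16-byte-aligned
    // region large enough for the ucontext_t always fits inside it; the
    // aligned offset is picked at run time.
    func aligned16(buf []unsafe.Pointer) unsafe.Pointer {
        p := uintptr(unsafe.Pointer(&buf[0]))
        return unsafe.Pointer((p + 15) &^ 15) // round up to a 16-byte boundary
    }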
Reviewed-on: https://go-review.googlesource.com/28910
From-SVN: r240044
The default stack size for the gsignal goroutine, 32K, is not enough on
ia64. Make sure that the stack size is at least SIGSTKSZ.
Reviewed-on: https://go-review.googlesource.com/28224
From-SVN: r239894
Use the new -fgo-c-header option to build a header file for the Go
runtime code in libgo/go/runtime, and use the new header file in the C
runtime code in libgo/runtime. This will ensure that the Go code and C
code share the same data structures as we convert the runtime from C to
Go.
The new file libgo/go/runtime/runtime2.go is copied from the Go 1.7
release, and then edited to remove unnecessary data structures and
modify others for use with libgo.
The new file libgo/go/runtime/mcache.go is an initial version of the
same files in the Go 1.7 release, and will be replaced by the Go 1.7
file when we convert to the new memory allocator.
The new file libgo/go/runtime/type.go describes the gccgo version of the
reflection data structures, and replaces the Go 1.7 runtime file which
describes the gc version of those structures.
Using the new header file means changing a number of struct fields to
use Go naming conventions (that is, no underscores) and to rename
constants to have a leading underscore so that they are not exported
from the Go package. These names were updated in the C code.
The C code was also changed to drop the thread-local variable m, as was
done some time ago in the gc sources. Now the m field is always
accessed using g->m, where g is the single remaining thread-local
variable. This in turn required some adjustments to set g->m correctly
in all cases.
Also pass the new -fgo-compiling-runtime option when compiling the
runtime package, although that option doesn't do anything yet.
Reviewed-on: https://go-review.googlesource.com/28051
From-SVN: r239872
PR go/72814
runtime: treat zero-sized result value as void
Change the FFI interface to treat a call to a function that returns a
zero-sized result as a call to a function that returns void.
This is part of the fix for https://gcc.gnu.org/PR72814. On 32-bit
SPARC systems, a call to a function that returns a non-zero-sized struct
is followed by an unimp instruction that describes the size of the
struct. The function returns to the address after the unimp
instruction. The libffi library can not represent a zero-sized struct,
so we wind up treating it as a 1-byte struct. Thus in that case libffi
follows the call with an unimp instruction, but the called function does
not adjust the return address. The result is that the program attempts to
execute the unimp instruction, causing a crash.
This is part of a change that fixes the crash by treating all functions
that return zero bytes as functions that return void.
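A rough Go sketch of the rule; the real change is in libgo's C interface
to libffi, and every name below is invented for illustration:

    package runtime

    // ffiType stands in for libffi's type descriptor.
    type ffiType struct{}

    // ffiResultType describes a function result to libffi. A zero-sized
    // result is described as void, so libffi never uses the 32-bit SPARC
    // struct-return sequence for it.
    func ffiResultType(size uintptr, structType, voidType *ffiType) *ffiType {
        if size == 0 {
            return voidType
        }
        return structType
    }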
Reviewed-on: https://go-review.googlesource.com/25585
* go-gcc.cc (Gcc_backend::function_type): If the return type is
zero bytes, treat the function as returning void.
(return_statement): If the return type is zero bytes, don't
actually return any values.
From-SVN: r239252
Previously the libgo Makefile explicitly listed the set of files to
compile for each package. For packages that use build tags, this
required a lot of awkward automake conditionals in the Makefile.
This CL changes the build to look at the build tags in the files.
The new shell script libgo/match.sh does the matching. This required
adjusting a lot of build tags, and removing some files that are never
used. I verified that the exact same sets of files are compiled on
amd64 GNU/Linux. I also tested the build on i386 Solaris.
Writing match.sh revealed some bugs in the build tag handling that
already exists, in a slightly different form, in the gotest shell
script. This CL fixes those problems as well.
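For example, a file whose header reads as follows is built only on
GNU/Linux and FreeBSD; match.sh now reads that line itself instead of
relying on an automake conditional:

    // +build linux freebsd

    package net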
The old code used automake conditionals to handle systems that were
missing strerror_r and wait4. Rather than deal with those in Go, those
functions are now implemented in runtime/go-nosys.c when necessary, so
the Go code can simply assume that they exist.
The os testsuite looked for dir_unix.go, which was never built for gccgo
and has now been removed. I changed the testsuite to look for dir.go
instead.
Reviewed-on: https://go-review.googlesource.com/25546
From-SVN: r239189
This is a port of https://golang.org/cl/18150 to the gccgo runtime.
The previous behaviour of installing the signal handlers in a separate
thread meant that Go initialization raced with non-Go initialization if
the non-Go initialization also wanted to install signal handlers. Make
installing signal handlers synchronous so that the process-wide behavior
is predictable.
Reviewed-on: https://go-review.googlesource.com/19494
From-SVN: r233393
PR go/69511
runtime: change G gcstack_size field to size_t
Because its address is passed to __splitstack_find, which expects size_t*.
From Dominik Vogt in GCC PR 69511.
Reviewed-on: https://go-review.googlesource.com/19429
From-SVN: r233260
This fixes the long-standing bug in which the testing package misreports
the file/line of an error.
Reviewed-on: https://go-review.googlesource.com/19179
From-SVN: r233098
PR go/61303
runtime: don't overallocate in select code
If we've already allocated an fd_set, don't allocate another one.
Also, don't bother to read from rdwake if it wasn't returned in select.
Fixes https://gcc.gnu.org/PR61303.
Reviewed-on: https://go-review.googlesource.com/17243
From-SVN: r230922
PR go/68496
reflect: Allocate space for FFI functions returning a zero-sized type.
The libffi library does not understand zero-sized types. We represent
them as a struct with a single field of type void. If such a type is
returned from a function, libffi will copy 1 byte of data. Allocate
space for that byte, although we won't use it.
Fixes https://gcc.gnu.org/PR68496.
Reviewed-on: https://go-review.googlesource.com/17175
From-SVN: r230776
Only build net/hook_cloexec.go on GNU/Linux and FreeBSD, because those
are the only systems with accept4.
Add syscall/libcall_bsd.go to define sendfile for *BSD and Solaris.
Revert tcpsockopt_solaris.go back to the earlier version, so that it
works on Solaris 10.
Always pass the address of a Pid_t value to TIOCGPGRP and TIOCSPGRP.
Include <unistd.h> in runtime/go-varargs.c.
Reviewed-on: https://go-review.googlesource.com/16719
From-SVN: r229880
When not using split stacks, libgo allocates large stacks for each
goroutine. On a 64-bit system, libgo allocates a maximum of 128G for
the Go heap, and allocates 4M for each stack. When the stacks are
allocated from the Go heap, the result is that a program can only create
32K goroutines, which is not enough for an active Go server. This patch
changes libgo to allocate the stacks using mmap directly, rather than
allocating them out of the Go heap. This change is only done for 64-bit
systems when not using split stacks. When using split stacks, the
stacks are allocated using mmap directly anyhow. On a 32-bit system,
there is no fixed maximum size for the Go heap; rather, the maximum size
is simply the available address space.
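For reference, the arithmetic behind the 32K figure mentioned above:

    package runtime

    const (
        maxHeap       = 128 << 30           // 128G maximum Go heap on 64-bit
        stackSize     = 4 << 20             // 4M per goroutine stack
        maxGoroutines = maxHeap / stackSize // 32768, i.e. 32K
    )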
Reviewed-on: https://go-review.googlesource.com/16531
From-SVN: r229636
Type descriptors picked up a zero field because the gc map
implementation used it. However, it's since been dropped by the gc
library. It was never used by gccgo. Drop it now in preparation for
upgrading to the Go 1.5 library.
Reviewed-on: https://go-review.googlesource.com/16486
From-SVN: r229546
Change the type descriptor hash and equal functions from C code pointers
to Go func values. This permits them to be set to a Go function
closure. This is in preparation for the Go 1.5 release, so that we can use a
closure for the hash/equal functions returned by the new reflect.ArrayOf
function.
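Schematically, the descriptor changes along these lines; the field names
and signatures shown are illustrative rather than exact:

    package runtime

    import "unsafe"

    // Before: raw C function pointers stored as code addresses.
    // After: ordinary Go func values, so a closure (such as one built by
    // the new reflect.ArrayOf) can be stored directly.
    type commonType struct {
        hashfn  func(unsafe.Pointer, uintptr) uintptr
        equalfn func(unsafe.Pointer, unsafe.Pointer, uintptr) bool
        // remaining descriptor fields elided
    }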
Reviewed-on: https://go-review.googlesource.com/16485
From-SVN: r229541
PR go/67874
net, runtime: Call C library fcntl function rather than syscall.Syscall.
Not all systems define a fcntl syscall; some only have fcntl64.
Fixes GCC PR go/67874.
Reviewed-on: https://go-review.googlesource.com/15497
From-SVN: r228576
PR go/67101
runtime: Remove call to __builtin_frame_address.
__builtin_frame_address was only supposed to use nonzero arguments
for debugging purposes. Calling it with nonzero arguments can have
unpredictable results and uses are now marked unsafe when
-Wframe-address is enabled.
Reviewed-on: https://go-review.googlesource.com/13063
From-SVN: r226525
As the removed comment states, if the package being compiled played
certain tricks with pointers that looked like integers, the compiler
might allocate space for new pointers unnecessarily. Since the type
information on the heap is now precise, this logic can be moved to the
runtime.
Reviewed-on: https://go-review.googlesource.com/11581
From-SVN: r225757
When libgo is not optimized the static function profilealloc
in malloc.goc shows up in the stack trace. Rename it to
runtime_profilealloc so that runtime/pprof.printStackRecord
ignores it.
From-SVN: r223006
These changes permit using the go tool from the upcoming Go
1.5 release with -buildmode=c-archive to build gccgo code into
an archive file that can be linked with a C program.
From-SVN: r222594
PR go/65755
compiler, runtime, reflect: Use reflection string for type comparisons.
Change the runtime and reflect libraries to always use only
the type reflection string to determine whether two types are
equal. It previously used the PkgPath and Name values for a
type name, but that required a PkgPath that did not match the
gc compiler.
Change the compiler to use the same PkgPath value as the gc
compiler in all cases.
Change the compiler to put the receiver type in the reflection
string for a type defined inside a method.
From-SVN: r222194
PR go/65349
runtime: Don't call malloc from __go_file_line callback.
When crashing, we call runtime_printcreatedby which calls
__go_file_line which used to call the Go malloc. If we are
crashing due to a signal due to heap corruption of some sort,
the Go malloc lock might already be held, leading to a crash
within a crash. Avoid that by assuming that the libbacktrace
strings will stick around, as we already do in go-callers.c.
From-SVN: r221291
Add memprofilerate as a value recognized
in the GODEBUG env var. The value provided
is used as the new setting for
runtime.MemProfileRate, allowing the user
to adjust memory profiling.
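For example, GODEBUG=memprofilerate=262144 is roughly equivalent to the
following (the value is arbitrary):

    package main

    import "runtime"

    func init() {
        // The profiler then samples about one allocation per 262144 bytes.
        runtime.MemProfileRate = 262144
    }

    func main() {}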
From-SVN: r220470
Change from using __go_set_closure to passing the closure
value in the static chain field. Uses new backend support for
setting the closure chain in a call from C via
__builtin_call_with_static_chain. Uses new support in libffi
for Go closures.
The old architecture specific support for reflect.MakeFunc is
removed, replaced by the libffi support.
All work done by Richard Henderson.
* go-gcc.cc (Gcc_backend::call_expression): Add chain_expr argument.
(Gcc_backend::static_chain_variable): New method.
From-SVN: r219776
A recent libffi upgrade caused the reflect test to fail on
386. The problem case is a function that returns an empty
struct--a struct with no fields. The libffi library does not
recognize the existence of empty structs, presumably since
they can't happen in C. To work around this, the Go interface
to the libffi library changes an empty struct to void. This
normally works fine, but with the new libffi upgrade it fails
for a function that returns an empty struct. On 386 a
function that returns a struct is expected to pop the hidden
pointer when it returns. So when we convert an empty struct
to void, libffi is calling a function that pops the hidden
pointer but does not expect that to happen.
In the older version of libffi, this didn't matter, because
the libffi code for 386 used a frame pointer, so the fact that
the stack pointer was wonky when the function returned was
ignored as the stack pointer was immediately replaced by the
saved frame pointer. In the newer version of libffi, the 386
code is more efficient and does not use a frame pointer, and
therefore it matters whether libffi expects the function to
pop the hidden pointer or not.
This patch changes libgo to convert an empty struct to a struct with
a single field of type void. This seems to be enough to get
the test cases working again.
Of course the real fix would be to change libffi to handle
empty types, but as libffi uses size == 0 as a marker for an
uninitialized type, that would be a non-trivial change.
From-SVN: r219701
This upgrades all of libgo other than the runtime package to
the Go 1.4 release. In Go 1.4 much of the runtime was
rewritten into Go. Merging that code will take more time and
will not change the API, so I'm putting it off for now.
There are a few runtime changes anyhow, to accommodate other
packages that rely on minor modifications to the runtime
support.
The compiler changes slightly to add a one-bit flag to each
type descriptor kind that is stored directly in an interface,
which for gccgo is currently only pointer types. Another
one-bit flag (gcprog) is reserved because it is used by the gc
compiler, but gccgo does not currently use it.
There is another error check in the compiler since I ran
across it during testing.
gotools/:
* Makefile.am (go_cmd_go_files): Sort entries. Add generate.go.
* Makefile.in: Rebuild.
From-SVN: r219627
Fix an unusual C to Go callback case. Newly created C threads
call into Go code, forcing the Go code to allocate new M and G
structures. While executing Go code, the stack is split. The
Go code then returns. Returning from a Go callback is treated
as entering a system call, so the G gcstack field is set to
point to the Go stack. In this case, though, we were called
from a newly created C thread, so we drop the extra M and G
structures. The C thread then exits.
Then a new C thread calls into Go code, reusing the previously
created M and G. The Go code requires a larger stack frame,
causing the old stack segment to be unmapped and a new stack
segment allocated. At this point the gcstack field is
pointing to the old stack segment.
Then a garbage collection occurs. The garbage collector sees
that the gcstack field is not nil, so it scans it as the first
stack segment. Unfortunately it points to memory that was
unmapped. So the program crashes.
The fix is simple: when handling extra G structures created
for callbacks from new C threads, clear the gcstack field.
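Schematically, with approximate names (the actual code is in the C part
of the runtime):

    package runtime

    import "unsafe"

    type g struct {
        gcstack unsafe.Pointer // stack bottom saved for the garbage collector
        // other fields elided
    }

    // dropExtraG runs when an M and G created for a callback from a new C
    // thread are released. Clearing gcstack keeps a later collection from
    // scanning a stack segment that may since have been unmapped.
    func dropExtraG(gp *g) {
        gp.gcstack = nil
    }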
From-SVN: r218699
From Dominik Vogt.
* libgo/go/syscall/libcall_linux_s390.go: New file for s390 support.
* libgo/go/syscall/syscall_linux_s390.go: Ditto.
* libgo/go/syscall/libcall_linux_s390x.go: New file for s390x support.
* libgo/go/syscall/syscall_linux_s390x.go: Ditto.
* libgo/go/runtime/pprof/pprof.go (printStackRecord): Support s390 and
s390x.
* libgo/runtime/runtime.c (runtime_cputicks): Add support for s390 and
s390x.
* libgo/mksysinfo.sh: Ditto.
(upcase_fields): New helper function.
* libgo/go/debug/elf/file.go (applyRelocations): Implement relocations
on s390x.
(applyRelocationsS390x): Ditto.
(DWARF): Ditto.
* libgo/go/debug/elf/elf.go (R_390): New constants for S390 relocations.
(r390Strings): Ditto.
(String): Helper function for S390 relocations.
(GoString): Ditto.
* libgo/go/reflect/makefuncgo_s390.go: New file.
(S390MakeFuncStubGo): Implementation of s390 abi.
* libgo/go/reflect/makefuncgo_s390x.go: New file.
(S390xMakeFuncStubGo): Implementation of s390x abi.
* libgo/go/reflect/makefunc_s390.c: New file.
(makeFuncStub): s390 and s390x specific implementation of function.
* libgo/go/reflect/makefunc.go
(MakeFunc): Add support for s390 and s390x.
(makeMethodValue): Ditto.
(makeValueMethod): Ditto.
* libgo/Makefile.am (go_reflect_makefunc_s_file): Ditto.
(go_reflect_makefunc_file): Ditto.
* libgo/go/reflect/makefunc_dummy.c: Ditto.
* libgo/runtime/runtime.h (__go_makefunc_can_recover): Export prototype
for use in makefunc_s390.c.
(__go_makefunc_returning): Ditto.
* libgo/go/syscall/exec_linux.go (forkAndExecInChild): Fix order of the
arguments of the clone system call for s390[x].
* libgo/configure.ac (is_s390): New variable.
(is_s390x): Ditto.
(LIBGO_IS_S390): Ditto.
(LIBGO_IS_S390X): Ditto.
(GOARCH): Support s390 and s390x.
* libgo/go/go/build/build.go (cgoEnabled): Ditto.
* libgo/go/go/build/syslist.go (goarchList): Ditto.
From-SVN: r217106
We want to create goroutines with a small stack, at least on
systems where split stacks are supported. We don't need to
create threads with a small stack.
From-SVN: r216353
PR go/60406
runtime: Check callers in can_recover if return address doesn't match.
Also use __builtin_extract_return_address and tighten up the
checks in FFI code.
Fixes PR 60406.
From-SVN: r216003
If the compiler inlines this function into kickoff, it may reuse
the TLS block address to load g. However, this is not necessarily
correct, as the call to g->entry in kickoff may cause the TLS
address to change. If the wrong value is loaded for g->status in
runtime_goexit, it may cause a runtime panic.
By marking the function as noinline we prevent the compiler from
reusing the TLS address.
From-SVN: r215484
The Go frontend passes closures through to functions using the
functions __go_set_closure and __go_get_closure. The
expectation is that there are no function calls between
set_closure and get_closure. However, it turns out that there
can be function calls if some of the function arguments
require type conversion to an interface type. Converting to
an interface type can allocate memory, and that can in turn
trigger a garbage collection, and that can in turn call pool
cleanup functions that may call __go_set_closure. So the
called function can see the wrong closure value, which is bad.
This patch fixes the problem in two different ways. First, we
move all type conversions in function arguments into temporary
variables so that they can not appear before the call to
__go_set_closure. (This required shifting the flatten phase
after the simplify_thunk phase, since the latter expects to
work with unconverted argument types.) Second, we fix the
memory allocation function to preserve the closure value
across any possible garbage collection.
A test case is the libgo database/sql check run with the
environment variable GOGC set to 1.
From-SVN: r213932
PR other/61895
runtime: Ignore small argv[0] file for backtrace.
Reportedly in some cases Docker starts processes with argv[0]
pointing to an empty file. That would cause libgo to pass
that empty file to libbacktrace, which would then fail to do
any backtraces. Everything should work fine if libbacktrace
falls back to /proc/self/exe.
This patch to libgo works around the problem by ignoring
argv[0] if it is a small file, or if stat fails. This is not
a perfect fix but it's an unusual problem.
From-SVN: r213513
PR go/61620
runtime: Don't free tiny blocks in map deletion.
The memory allocator now has a special case for tiny blocks
(smaller than 16 bytes) and they can not be explicitly freed.
From-SVN: r212233
PR go/52583
runtime: Stop backtrace at a few recognized functions.
On x86_64 Solaris the makecontext function does not properly
indicate that it is at the top of the stack. Attempting to
unwind the stack past a call to makecontext tends to crash.
This patch changes libgo to look for certain functions that
are always found at the top of the stack, and to stop
unwinding when it reaches one of those functions. There is
never anything interesting past these functions--that is,
there is never any code written by the user.
From-SVN: r211640
PR go/61498
runtime: Always set gcnext_sp to pointer-aligned address.
The gcnext_sp field is only used on systems that do not use
split stacks. It marks the bottom of the stack for the
garbage collector. This change makes sure that the stack
bottom is always aligned to a pointer value.
Previously the garbage collector would align all the addresses
that it scanned, but it now expects them to be aligned before
scanning.
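The alignment itself is just rounding the saved address down, for example:

    package runtime

    import "unsafe"

    // alignSP rounds a stack address down to a pointer-aligned boundary so
    // the collector can scan from it without adjusting it first.
    func alignSP(sp uintptr) uintptr {
        const ptrSize = unsafe.Sizeof(uintptr(0))
        return sp &^ (ptrSize - 1)
    }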
From-SVN: r211639
This revision was committed January 7, 2014. The next
revision deleted runtime/mfinal.c. That will be done in a
subsequent merge.
This merge changes type descriptors to add a zero field,
pointing to a zero value for that type. This is implemented
as a common variable.
* go-gcc.cc (Gcc_backend::implicit_variable): Add is_common and
alignment parameters. Permit init parameter to be NULL.
From-SVN: r211249
LLVM's code generator does not currently support split stacks for vararg
functions, so we disable split stacks for the only function that uses this
feature under Clang. This appears to be OK as long as:
- this function only calls non-inlined, internal-linkage (hence no dynamic
loader) functions compiled with split stacks (i.e. go_vprintf), which can
allocate more stack space as required;
- this function itself does not occupy more than BACKOFF bytes of stack space
(see libgcc/config/i386/morestack.S).
These conditions are currently known to be satisfied by Clang on x86-32 and
x86-64. Note that signal handlers receive slightly less stack space than they
would normally do if they happen to be called while this function is being
run. If this turns out to be a problem we could consider increasing BACKOFF.
From-SVN: r211037
This includes the use of __complex and __builtin_ functions where
unprefixed entities would suffice, and the use of a union for
bit-casting between types.
From-SVN: r211036
PR go/60931
runtime: Fix garbage collector issue with non 4kB system page size
The Go garbage collector tracks memory in terms of 4kB pages.
Most of the code checks getpagesize() at runtime and does the
right thing.
On a 64kB ppc64 box I see SEGVs in long running processes,
which have been diagnosed as a bug in scavengelist.
scavengelist does a madvise(MADV_DONTNEED) without rounding
the arguments to the system page size. A strace of one of the
failures shows the problem:
madvise(0xc211030000, 4096, MADV_DONTNEED) = 0
The kernel rounds the length up to 64kB and we mark 60kB of
valid data as no longer needed.
Round start up to a system page and end down before calling
madvise.
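The rounding rule, sketched in Go for clarity (the actual change is in
the C memory allocator):

    package runtime

    // roundToPages shrinks [start, start+length) to the largest sub-range
    // made of whole system pages (pageSize must be a power of two), so that
    // madvise(MADV_DONTNEED) can never touch valid data outside the range.
    func roundToPages(start, length, pageSize uintptr) (uintptr, uintptr) {
        end := start + length
        start = (start + pageSize - 1) &^ (pageSize - 1) // round start up
        end &^= pageSize - 1                             // round end down
        if end <= start {
            return 0, 0 // no whole page is covered
        }
        return start, end - start
    }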
From-SVN: r209777
This patch fixes a rare but serious bug. The Go garbage
collector only examines Go stacks. When Go code calls a
function that is not written in Go, it first calls
syscall.Entersyscall. Entersyscall records the position of
the Go stack pointer and saves a copy of all the registers.
If the garbage collector runs while the thread is executing
the non-Go code, the garbage collector fetches the stack
pointer and registers from the saved location.
Entersyscall saves the registers using the getcontext
function. Unfortunately I didn't consider the possibility
that Entersyscall might itself change a register before
calling getcontext. This only matters for callee-saved
registers, as caller-saved registers would be visible on the
saved stack. And it only matters if Entersyscall is compiled
to save and modify a callee-saved register before it calls
getcontext. And it only matters if a garbage collection
occurs while the non-Go code is executing. And it only
matters if the only copy of a valid Go pointer happens to be
in the callee-saved register when Entersyscall is called.
When all those conditions are true, the Go pointer might get
collected incorrectly, leading to memory corruption.
This patch tries to avoid the problem by splitting
Entersyscall into two functions. The first is a simple
function that just calls getcontext and then calls the rest of
Entersyscall. This should fix the problem, provided the
simple Entersyscall function does not itself modify any
callee-saved registers before calling getcontext. That seems
to be true on the systems I checked. But since the argument
to getcontext is an offset from a TLS variable, it won't be
true on a system which needs to save callee-saved registers in
order to get the address of a TLS variable. I don't know why
any system would work that way, but I don't know how to rule
it out. I think that on any such system this will have to be
implemented in assembler. I can't put the ucontext_t
structure on the stack, because this function can not split
stacks, and the ucontext_t structure is large enough that it
could cause a stack overflow.
From-SVN: r208390
Before this, the heap location used on a 64-bit system was not
available to user-space on arm64, so the "32-bit" strategy ended up
being used. So use an address that is available, and that, for bonus
points, is far away from where the kernel allocates address space by
default.
From-SVN: r207977
The spans array is allocated in runtime_mallocinit. On a
32-bit system the number of entries in the spans array is
MaxArena32 / PageSize, which is (2U << 30) / (1 << 12) == (1 << 19).
So we are allocating an array that can hold 19 bits for an
index that can hold 20 bits. According to the comment in the
function, this is intentional: we only allocate enough spans
(and bitmaps) for a 2G arena, because allocating more would
probably be wasteful.
But since the span index is simply the upper 20 bits of the
memory address, this scheme only works if memory addresses are
limited to the low 2G of memory. That would be OK if we were
careful to enforce it, but we're not. What we are careful to
enforce, in functions like runtime_MHeap_SysAlloc, is that we
always return addresses between the heap's arena_start and
arena_start + MaxArena32.
We generally get away with it because we start allocating just
after the program end, so we only run into trouble with
programs that allocate a lot of memory, enough to get past
address 0x80000000.
This changes the code that computes a span index to subtract
arena_start on 32-bit systems just as we currently do on
64-bit systems.
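In other words, using the constants given above and an invented helper
name:

    package runtime

    const (
        MaxArena32 = 2 << 30               // 2G arena on 32-bit systems
        PageSize   = 1 << 12               // 4K pages
        NumSpans   = MaxArena32 / PageSize // 1 << 19 span entries
    )

    // spanIndex now subtracts arena_start on 32-bit systems as well, so the
    // index fits the spans array no matter where the arena starts.
    func spanIndex(p, arenaStart uintptr) uintptr {
        return (p - arenaStart) / PageSize
    }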
From-SVN: r206501
Fixes issue 6761
This simple change seems to work fine, slightly to my surprise.
This includes the tests I submitted to the main Go repository at
https://codereview.appspot.com/26570046
From-SVN: r205001
This changes the compiler and runtime to not pass a closure
value as the last argument, but to instead pass it via
__go_set_closure and retrieve it via __go_get_closure. This
eliminates the need for function descriptor wrapper functions.
It will make it possible to retrieve the closure value in a
reflect.MakeFunc function.
From-SVN: r202233
A function that returns an interface type and returns a value
that requires memory allocation will try to allocate while
appearing to be in a syscall. This patch lets that work.
From-SVN: r201226
This fixes a problem on Solaris, where end is not defined in
the main program but comes from some shared library. This
only matters for 32-bit targets.
From-SVN: r201220
This changes the representation of a Go value of function type
from being a pointer to function code (like a C function
pointer) to being a pointer to a struct. The first field of
the struct points to the function code. The remaining fields,
if any, are the addresses of variables referenced in enclosing
functions. For each call to a function, the address of the
function descriptor is passed as the last argument.
This lets us avoid generating trampolines, and removes the use
of writable/executable sections of the heap.
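Conceptually, the new representation looks like this; the compiler lays
the descriptor out itself, and the struct below is only an illustration:

    package runtime

    import "unsafe"

    // A Go value of function type is now a pointer to a descriptor like
    // this. A plain function has only the code pointer; for a closure the
    // addresses of the referenced variables follow it.
    type funcDescriptor struct {
        code unsafe.Pointer // the function's machine code
        // addresses of captured variables follow, if any
    }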
From-SVN: r200181
PR go/56320
runtime: Support Solaris AMD64 in lfstack.
The address space layout is similar on SPARC64 and AMD64 when
running Solaris.
From-SVN: r196179
The mmap() call which reserves the arena should have MAP_NORESERVE
flag as in typical cases this memory will never be (fully) needed.
This matters in environments which do not do Linux style memory
overcommit, such as OpenIndiana/OpenSolaris/Solaris.
The MAP_NORESERVE flag does not exist on all operating systems
(for example FreeBSD). Therefore we define it to zero value in
case it does not exist.
Fixes issue 21.
From-SVN: r196088