Go requires that pointer moves are done 8 bytes at a time, but gccgo
uses libc's memmove and memset, which do not guarantee that; in some
cases an 8-byte move might be done as 4+4.
To enforce 8-byte moves in memmove and memset, this adds C
implementations of both to libgo/runtime, used on ppc64le and ppc64.
Asm implementations were considered but discarded to avoid maintaining
different implementations for different target ISAs.
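In outline, the C implementation copies the aligned middle of the
buffer with single 8-byte loads and stores (a simplified sketch, not
the actual libgo code; overlap handling and mismatched alignment are
reduced to the bare minimum here):

    #include <stddef.h>
    #include <stdint.h>

    /* Copy n bytes, using single 8-byte loads/stores for the aligned
       middle so pointer-sized words are never split into 4+4 moves.
       Assumes dst and src do not overlap and share 8-byte alignment. */
    void *word_memmove(void *dst, const void *src, size_t n) {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n > 0 && ((uintptr_t)d & 7) != 0) {   /* align head */
            *d++ = *s++;
            n--;
        }
        while (n >= 8) {                             /* whole words */
            *(uint64_t *)d = *(const uint64_t *)s;
            d += 8;
            s += 8;
            n -= 8;
        }
        while (n-- > 0)                              /* tail bytes */
            *d++ = *s++;
        return dst;
    }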
Fixes golang/go#41428
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/294931
This does not yet include support for the //go:embed directive added
in this release.
* Makefile.am (check-runtime): Don't create check-runtime-dir.
(mostlyclean-local): Don't remove check-runtime-dir.
(check-go-tool, check-vet): Copy in go.mod and modules.txt.
(check-cgo-test, check-carchive-test): Add go.mod file.
* Makefile.in: Regenerate.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/280172
We were using the old name, but nothing noticed because it is a weak
reference that is permitted to be nil, so that it works with code that
does not use the field tracking library.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/275449
Go used to use slightly different semantics than C99 for complex division,
so we used runtime routines to handle the difference. The gc compiler
has changed its behavior to match C99, so change ours as well.
For golang/go#14644
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/274213
Overhaul the mangling scheme to avoid ambiguities if the package path
contains a dot. Instead of using dot both to separate components and
to mangle characters, use dot only to separate components and use
underscore to mangle characters.
For golang/go#41862
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/271726
Only compile the __go_ptrace varargs shim on Linux to avoid compilation
failures on some other platforms. The C ptrace function is not entirely
portable (e.g., NetBSD has `int data` instead of `void* data`), and so
far Linux is the only platform that needs the varargs shim.
Additionally, make the types in the ptrace and raw_ptrace function
declarations match. This makes it more clear that the only difference
between the two is that calls via the former are allowed to block while
calls via the latter are not.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/263517
Make the types of the addr and data arguments in the __go_ptrace shim
match the types declared in Go and the types declared by the C ptrace
function, i.e., void*. This avoids a warning about an implicit
int-to-pointer cast on some platforms.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/262340
Calls to _Unwind_RaiseException and friends *can* return due to bugs in
libgo or memory corruption. When this occurs, print a message to stderr
with the reason code before aborting to aid debugging.
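In outline, the check looks something like this (a sketch; the real
code lives in libgo's panic/unwind path and the helper name here is
hypothetical):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unwind.h>

    /* _Unwind_RaiseException returns only on failure, so any return
       value at all indicates a bug or corruption; report it. */
    static void raise_exception_or_die(struct _Unwind_Exception *exc) {
        _Unwind_Reason_Code rc = _Unwind_RaiseException(exc);
        fprintf(stderr, "_Unwind_RaiseException returned %d\n", (int)rc);
        abort();
    }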
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/261257
The AIX ptrace syscall does not have the same semantics as the glibc
one. The syscall package already handles it correctly, so disable the
new __go_ptrace C function on AIX.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/256777
ptrace is actually declared as a variadic function. On ppc64le the
ABI requires the caller to allocate space for the parameters and
allows the callee to modify them.
Depending on how and which version of GCC is used, the callee may
save to the parameter save area. This happened to clobber a saved LR,
and caused syscall.TestExecPtrace to fail with a timeout when the
tracee segfaults and waits for the parent process to inspect it.
Wrap this function to avoid calling glibc's ptrace directly from Go.
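The shim's shape is roughly the following (a sketch of __go_ptrace,
not the exact libgo source):

    #include <sys/ptrace.h>
    #include <sys/types.h>

    /* A fixed-prototype C wrapper so the Go side never makes a
       variadic call itself; the variadic call into glibc happens
       here, with the parameter save area correctly allocated. */
    long __go_ptrace(int request, pid_t pid, void *addr, void *data) {
        return ptrace(request, pid, addr, data);
    }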
Fixes golang/go#36698
Fixes go/92567
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/254755
si_code in siginfo_t is a macro on NetBSD, not a member of the
struct itself, so add a C trampoline for receiving its value.
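The trampoline amounts to a one-line C function (a sketch; the actual
function name in libgo may differ):

    #include <signal.h>

    /* On NetBSD si_code is a macro, not a struct member, so Go cannot
       read it directly; evaluate it from C instead. */
    int getSiginfoCode(siginfo_t *info) {
        return info->si_code;
    }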
Also replace references to mos.waitsemacount with the replacement and
add some helpers from os_netbsd.go in the GC repository.
Update golang/go#38538.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/228918
Use specific panic functions instead, which are mostly already in the
runtime package.
Also correct "defer nil" to panic when we execute the defer, rather
than throw when we queue it.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/213642
From-SVN: r279979
Right now we generate hash functions for all types, just in case they
are used as map keys. That's a lot of wasted effort and binary size
for types which will never be used as a map key. Instead, generate
hash functions only for types that we know are map keys.
Just doing that is a bit too simple, since maps with an interface type
as a key might have to hash any concrete key type that implements that
interface. So for that case, implement hashing of such types at
runtime (instead of with generated code). It will be slower, but only
for maps with interface types as keys, and maybe only a bit slower as
the aeshash time probably dominates the dispatch time.
Reorganize where we keep the equals and hash functions. Move the hash function
from the key type to the map type, saving a field in every non-map type.
That leaves only one function in the alg structure, so get rid of that and
just keep the equal function in the type descriptor itself.
While we're here, reorganize the rtype struct to more closely match
the gc version.
This is the gofrontend version of https://golang.org/cl/191198.
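Schematically, the reorganized descriptors look like this (a
simplified sketch; field names follow the gc runtime rather than the
exact libgo source):

    #include <stdbool.h>
    #include <stdint.h>

    struct type {      /* every type descriptor keeps only equality */
        uintptr_t size;
        bool (*equal)(const void *, const void *);
        /* ... other descriptor fields elided ... */
    };

    struct maptype {   /* the hash function lives on the map type */
        struct type typ;
        struct type *key;
        struct type *elem;
        uintptr_t (*hasher)(const void *key, uintptr_t seed);
    };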
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/212843
From-SVN: r279848
The .note.GNU-stack section tells the linker that this object does not
require an executable stack.
The .note.GNU-split-stack section tells the linker that functions in
this object can be called directly by split-stack functions, without
requiring a large stack.
The .note.GNU-no-split-stack section tells the linker that functions
in this object do not have a split-stack prologue.
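For reference, the notes are just empty sections; emitted from a C
file they would look roughly like this (a sketch using GNU assembler
directives, with .note.GNU-no-split-stack taking the same form):

    /* Mark this object as not requiring an executable stack, and as
       safe to call directly from split-stack code. */
    __asm__(".section .note.GNU-stack,\"\",@progbits\n.previous");
    __asm__(".section .note.GNU-split-stack,\"\",@progbits\n.previous");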
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/198440
From-SVN: r276488
PR go/91781
reflect: promote integer closure return to full word
The libffi library expects an integer return type to be promoted to a
full word. Implement that when returning from a closure written in Go.
This only matters on big-endian systems when returning an integer smaller
than the pointer size, which is why we didn't notice it until now.
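The fix amounts to writing small results through libffi's full-word
types, roughly (a sketch; the helper names here are hypothetical):

    #include <ffi.h>
    #include <stdint.h>

    /* libffi expects integer closure results promoted to a full word:
       sign-extended via ffi_sarg, zero-extended via ffi_arg. */
    static void set_result_int8(void *ret, int8_t v) {
        *(ffi_sarg *)ret = v;
    }

    static void set_result_uint8(void *ret, uint8_t v) {
        *(ffi_arg *)ret = v;
    }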
Fixes https://gcc.gnu.org/PR91781.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/195858
From-SVN: r275813
Permit putting structs with anonymous and empty fields in the C header
file runtime.inc that is used to build the C runtime code. This is
required for upcoming 1.13 support, as the m struct has picked up an
anonymous field.
Doing this lets the C header contain all the type descriptor structs,
so start using those in the C code. This cuts the number of copies of
type descriptor definitions from 3 to 2.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/192343
From-SVN: r275227
This is a step toward updating libgo to 1.13. This adds the 1.13
version of the osinit function to Go code, and removes the
corresponding code from the C runtime. This should simplify future updates.
Some additional 1.13 code was brought in to simplify this change.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/191717
From-SVN: r275010
Currently, getg is implemented in C, which loads the thread-local
g variable. The g variable is declared static in C.
This CL exposes the g variable, so it can be accessed from the Go
side. This allows the Go compiler to inline getg calls to direct
access of g.
Currently, the actual inlining is only implemented in the gollvm
compiler. The g variable is thread-local and the compiler backend
may choose to cache the TLS address in a register or on the stack. If
a thread switch happens the cache may become invalid. I don't
know how to disable the TLS address cache in gccgo, therefore
the inlining of getg is not implemented. In the future gccgo may
gain this if we know how to do it safely.
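Schematically, the change is just about visibility (a sketch; the
real variable and accessor names in libgo differ):

    struct g;                   /* the goroutine descriptor */

    __thread struct g *__go_g;  /* no longer static: visible to the
                                   compiler, so getg() can be inlined
                                   into a direct TLS load */

    struct g *getg(void) {
        return __go_g;
    }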
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/186238
From-SVN: r273499
Instead of going through a C function __go_memcmp, we can just
use __builtin_memcmp directly. This allows more optimizations in
the compiler backend.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/183537
From-SVN: r272620
Currently a string slice expression is implemented with a runtime
call __go_string_slice. Open-code it instead, which is more efficient
and allows the backend to optimize it further.
Also omit the write barrier for length-only update (i.e.
s = s[:n]).
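With strings represented as a (pointer, length) pair, the open-coded
s[i:j] is plain arithmetic the backend can fold (a sketch; bounds
checks are elided):

    #include <stdint.h>

    struct gostring {
        const unsigned char *str;
        intptr_t len;
    };

    /* Open-coded s[i:j]: no runtime call, just pointer arithmetic. */
    static struct gostring string_slice(struct gostring s,
                                        intptr_t i, intptr_t j) {
        struct gostring r = { s.str + i, j - i };
        return r;
    }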
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/182540
From-SVN: r272549
When the runtime collects a stack trace to associate it with some
profiling event (mem alloc, mutex, etc) there is a skip count passed
to runtime.Callers (or equivalent) to skip some known count of frames
in order to get to the "interesting" frame corresponding to the
profile event. Now that the profiling mechanism uses lazy fixup (when
removing compiler artifacts like thunks, morestack calls, etc.), we also
need to move the frame-skipping logic after the fixup, to ensure
that the skip count isn't thrown off by these artifacts.
Fixes golang/go#32290.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/179740
From-SVN: r271892
The gc compiler recognizes append(s, make([]T, n)...), and
generates code to directly zero the tail instead of allocating a
new slice and copying. This CL lets the Go frontend do basically
the same.
The difficulty is that at the point we handle append, there may
already be temporaries introduced (e.g. in order_evaluations),
which makes it hard to find the append-of-make pattern. The
compiler could "see through" the value of a temporary, but it is
only safe to do if the temporary is not assigned multiple times.
For this, we add tracking of assignments and uses for temporaries.
This also helps in optimizing non-escape slice make. We already
optimize non-escape slice make with constant len/cap to stack
allocation. But it failed to handle things like f(make([]T, n))
(where the slice doesn't escape and n is constant), because of
the temporary. With tracking of temporary assignments and uses,
it can handle this now as well.
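Conceptually, the lowered append-of-make grows the slice once and
zeroes the new tail in place, instead of materializing a temporary
slice and copying it (a sketch with a hypothetical, heavily
simplified growslice helper):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct goslice { void *ptr; intptr_t len, cap; };

    /* Hypothetical grow helper, heavily simplified. */
    static struct goslice growslice(struct goslice s, intptr_t newlen,
                                    size_t elemsize) {
        intptr_t newcap = s.cap > 0 ? s.cap : 1;
        while (newcap < newlen)
            newcap *= 2;
        void *p = malloc((size_t)newcap * elemsize);
        memcpy(p, s.ptr, (size_t)s.len * elemsize);
        s.ptr = p;
        s.cap = newcap;
        return s;
    }

    /* Sketch of append(s, make([]T, n)...): grow once if needed, then
       zero the n-element tail directly. */
    static struct goslice append_zeroed(struct goslice s, intptr_t n,
                                        size_t elemsize) {
        if (s.len + n > s.cap)
            s = growslice(s, s.len + n, elemsize);
        memset((char *)s.ptr + (size_t)s.len * elemsize, 0,
               (size_t)n * elemsize);
        s.len += n;
        return s;
    }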
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/179597
From-SVN: r271822
Currently, goroutine switches are implemented with the libc
getcontext/setcontext functions, which save/restore the machine
register state and also the signal context. This does more than
we need, and performs an expensive syscall.
This CL implements a simplified version of getcontext/setcontext,
in assembly, that only saves/restores the necessary part, i.e.
the callee-save registers, and the PC, SP. A simplified version
of makecontext, written in C, is also added. Currently this is
only implemented on Linux/AMD64.
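On Linux/AMD64 the saved state reduces to the callee-saved registers
plus SP and PC, roughly (a sketch; the real layout is fixed by the
assembly implementation):

    #include <stdint.h>

    /* Minimal goroutine context: callee-saved registers plus stack
       pointer and resume address. No signal mask is saved, so no
       syscall is needed on a switch. */
    typedef struct {
        uint64_t rbx, rbp, r12, r13, r14, r15;
        uint64_t rsp;
        uint64_t rip;
    } min_context;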
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/178298
From-SVN: r271818
Revise the gccgo version of memory/block/mutex profiling to reduce
runtime overhead. The main change is to collect raw stack traces while
the profile is online, then post-process the stacks just before the
final product is needed. Memory profiling
(at a very low sampling rate) is enabled by default, and the overhead
of the symbolization / DWARF-reading from backtrace_full was slowing
things down relative to the main Go runtime.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/171497
From-SVN: r271172
runtime.throw needs a g to work properly. Set up g early, to
ensure that if something goes wrong in the runtime startup (e.g.
runtime.check fails), the program terminates in a reasonable way.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/176657
From-SVN: r271088
A direct interface is an interface whose data word contains the
actual data value, instead of a pointer to it. The gc toolchain
creates a direct interface if the value is pointer-shaped; that
includes pointers (including unsafe.Pointer), functions, channels,
maps, and structs and arrays containing a single pointer-shaped
field. In gccgo, we only do this for pointers. This CL unifies
direct interface types with gc. This reduces allocations when
converting such types to interfaces.
Our method functions used to always take pointer receivers, to
make interface calls easy. Now for direct interface types, their
value methods will take value receivers. For a pointer to those
types, when converted to interface, the interface data contains
the pointer. For that interface to call a value method, it will
need a wrapper method that dereferences the pointer and invokes
the value method. The wrapper method, instead of the actual one,
is put into the itable of the pointer type.
In the runtime, adjust funcPC for the new layout of interfaces of
functions.
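Schematically, with an interface value as a two-word pair (a
simplified sketch):

    /* A non-empty interface value: method table plus one data word. */
    struct iface {
        const void **methods;   /* itable */
        void *data;
    };

    /* Indirect (gccgo before this change, for every type): data points
       to a copy of the value, so conversion allocates.
       Direct (now, for pointer-shaped types): data holds the value
       itself; value methods reached through a pointer type's itable go
       via a wrapper that dereferences the data word first. */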
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/168409
From-SVN: r270779
Since aix/ppc64 was added to the GC toolchain, a mix of new and old
files has accumulated in the GCC toolchain.
This commit corrects the merge for aix/ppc64 and aix/ppc.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/167658
From-SVN: r269797
In signal-triggered stack scan, if the signal is delivered at a
bad time (e.g. in the vdso, or in the middle of setcontext?),
the unwinder may not be able to unwind the whole stack while it
still reports _URC_END_OF_STACK. So we cannot rely on _URC_END_OF_STACK
to tell whether it successfully scanned the stack. Instead, we check
the last Go frame to see whether it actually reached the end of the
stack. For a Go-created stack, this is runtime.kickoff. For a
C-created stack, we need to record the outermost Go frame when it
enters the Go side.
Also we cannot unwind the stack if the signal is delivered in the
middle of runtime.gogo, halfway through a goroutine switch, where
the g and the stack don't match. Give up in this case as well.
Reviewed-on: https://go-review.googlesource.com/c/159098
From-SVN: r269018
Currently, the compiler lowers runtime.getcallersp to
__builtin_frame_address(1). In the C side of the runtime,
getcallersp is defined as __builtin_frame_address(0). They don't
match. Further, neither of them actually returns the caller's SP.
On AMD64, __builtin_frame_address(0) just returns the frame
pointer. __builtin_frame_address(1) returns the memory content that
the frame pointer points to, which is typically the caller's frame
pointer but can also be garbage if the frame pointer is not enabled.
This CL changes it to use __builtin_dwarf_cfa(), which returns
the caller's SP at the call site. This matches the SP we get
from unwinding the stack.
Currently getcallersp is not used for anything real. It will be
used for precise stack scan (a new version of CL 159098).
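On the C side the definition becomes a one-liner (a sketch,
GCC-specific):

    #include <stdint.h>

    /* The CFA here is the caller's stack pointer at the call site,
       which matches what the stack unwinder reports. */
    uintptr_t getcallersp(void) {
        return (uintptr_t)__builtin_dwarf_cfa();
    }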
Reviewed-on: https://go-review.googlesource.com/c/162905
* go-gcc.cc (Gcc_backend::Gcc_backend): Define __builtin_dwarf_cfa
instead of __builtin_frame_address.
From-SVN: r268952