This only matters on systems that pass a struct with a single pointer
field differently from a bare pointer. I noticed it on
32-bit PPC, where the reflect package TestDirectIfaceMethod failed.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/195878
From-SVN: r275814
Restore Solaris compatibility fixes lost when internal/x/net/lif moved
to golang.org/x/net/lif. Also fix the Makefile for x/net/lif and
x/net/route.
Change x/sys/cpu to get the cache line size from goarch.sh as the
gofrontend version of internal/cpu does.
Partially based on work by Rainer Orth.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/194438
From-SVN: r275611
The gc compiler has started permitting go:linkname comments with a
single argument to mean that a function should be visible outside
its package. Implement this in the Go frontend.
Change the libgo runtime package to use it, rather than repeating the
name just to export a function.
Remove a couple of unnecessary go:linkname comments on declarations.
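For illustration, a minimal sketch of the new form (usleep here is just
an example name; the exact functions changed are in the libgo runtime
package):

    package runtime

    import _ "unsafe" // required for go:linkname

    // Previously the name had to be repeated just to export the function:
    //
    //	//go:linkname usleep runtime.usleep
    //
    // The single-argument form simply marks the function as externally
    // visible under its default symbol name.
    //
    //go:linkname usleep
    func usleep(usec uint32) {
    	// implementation elided
    }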
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/192197
From-SVN: r275239
This avoids problems with arm64 ILP32 mode. We might want to handle that
mode better in general, but always building panic32.go is simple and
fixes the build.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/192723
From-SVN: r275237
This is a step toward updating libgo to 1.13. It adds the 1.13
version of the osinit function to Go code and removes the
corresponding code from the C runtime, which should simplify future
updates.
Some additional 1.13 code was brought in to simplify this change.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/191717
From-SVN: r275010
Record when a local pointer variable is set to a value such that
indirecting through the pointer does not require a write barrier. Use
that information to eliminate write barriers when indirecting through
that local pointer variable. This information is only tracked per
block, so its applicability is limited.
This reduces the number of write barriers generated when compiling the
runtime package from 553 to 524.
The point of this is to eliminate a bad write barrier in the bytes
function in runtime/print.go. Mark that function nowritebarrier so
that the problem does not recur.
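For illustration, a minimal sketch of the pattern this optimization
targets (a hypothetical example, not the actual runtime code):

    package main

    var global int

    func main() {
    	var x *int   // x is a slot on the stack
    	p := &x      // p is set to the address of a stack location
    	*p = &global // storing through p needs no write barrier,
    	             // since p is known not to point into the heap
    }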
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/191581
From-SVN: r274890
With CL 190599, along with what we do in greyobject, we ensure
that we only mark allocated heap objects. As a result we can be
more strict in GC:
- Enable "sweep increased allocation count" check, which checks
that the number of mark bits set are no more than the number of
allocation bits.
- Enable invalid pointer check on heap scan. We only trace
allocated heap objects, which should not contain invalid
pointer.
This also brings the libgo runtime closer to the gc runtime.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/190797
From-SVN: r274678
When a defer is executed at most once in a function body,
we can allocate the defer record for it on the stack instead
of on the heap.
This should make defers like this (which are very common) faster.
This is a port of CL 171758 from the gc repo.
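For illustration, a sketch of the common shape that now qualifies
(assumes only the standard library):

    package main

    import (
    	"io"
    	"os"
    )

    func readFile(name string) ([]byte, error) {
    	f, err := os.Open(name)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close() // executed at most once per call, so the defer
    	                // record can live on the stack, not the heap
    	return io.ReadAll(f)
    }

    func main() {
    	if data, err := readFile("example.txt"); err == nil {
    		os.Stdout.Write(data)
    	}
    }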
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/190410
From-SVN: r274613
In gccgo, we insert the write barriers in the frontend, and so we
cannot completely prevent write barriers on stack writes. So it
is possible for a bad pointer to appear in the write barrier
buffer. When flushing the write barrier buffer, treat it the same
as scanning the stack. In particular, don't mark a pointer if it
does not point to an allocated object. We already have similar
logic in greyobject. With this, hopefully, we can completely
prevent unallocated objects from being marked.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/190599
From-SVN: r274598
For a select statement with zero cases, one case, or two cases
where one is a default, we can generate simpler code instead of
calling the generic selectgo. A zero-case select just blocks
execution. A one-case select mostly just executes the case. A
two-case select with a default case is a non-blocking send or
receive. We add these special cases when lowering a select
statement, as sketched below.
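For illustration, the three source-level forms (a compilable sketch;
the lowered helper names, such as selectnbrecv, follow the gc runtime
and may differ here):

    package sketch

    func use(int) {}

    func examples(ch chan int) {
    	// One case: behaves like the bare channel operation.
    	select {
    	case v := <-ch:
    		use(v)
    	}

    	// Two cases with a default: a non-blocking receive,
    	// lowered directly instead of calling the generic selectgo.
    	select {
    	case v := <-ch:
    		use(v)
    	default:
    		// channel empty; continue without blocking
    	}

    	// Zero cases: blocks execution forever.
    	select {}
    }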
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/184998
From-SVN: r273034
runtime.concatstring{2,3,4,5} are just wrappers of concatstrings.
These wrappers don't provide any benefit, at least in the C
calling convention we use, where passing arrays by value isn't
efficient. Change the compiler to always use concatstrings.
Also, the cap field of the slice passed to concatstrings is not
necessary, so change concatstrings to take a pointer and a length
directly, which is more efficient than passing a slice header by
value.
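For illustration, a compilable sketch of the pointer-plus-length shape
(a simplification; the real concatstrings also takes a temporary
buffer, and unsafe.Slice needs Go 1.17+):

    package sketch

    import "unsafe"

    // concatstrings joins the n strings starting at p. Passing a
    // pointer and a length avoids copying a 3-word slice header by
    // value, and the unused cap field disappears entirely.
    func concatstrings(p *string, n int) string {
    	out := ""
    	for _, s := range unsafe.Slice(p, n) {
    		out += s
    	}
    	return out
    }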
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/182539
From-SVN: r272476
In the runtime there are specialized fast map routines for
certain key types. This CL lets the compiler make use of these
functions, instead of always using the generic ones.
As we now generate multiple versions of map delete calls, to make
things easier we delay the expansion of the built-in delete
function to the flatten phase.
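For illustration, source patterns that can now lower to the
specialized routines (the helper names follow the gc runtime and are
assumptions here):

    package main

    func main() {
    	m := map[int64]string{}
    	m[42] = "x"   // can lower to mapassign_fast64
    	_ = m[42]     // can lower to mapaccess1_fast64
    	delete(m, 42) // can lower to mapdelete_fast64

    	s := map[string]int{}
    	delete(s, "k") // can lower to mapdelete_faststr
    }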
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/180858
From-SVN: r271983
Currently, the compiler already generates common symbols for type
descriptors, so the type descriptors are unique. However, when a
type is created through reflection, it is not deduplicated with
compiler-generated types. As a consequence, we cannot assume type
descriptors are unique, and cannot use pointer equality to
compare them. Also, when constructing a reflect.Type, it has to
go through a canonicalization map, which introduces overhead to
reflect.TypeOf and lock contention in concurrent programs.
In order for the reflect package to deduplicate types with
compiler-created types, we register all the compiler-created type
descriptors at startup time. The reflect package, when it needs
to create a type, looks up the registry of compiler-created types
before creating a new one. There is no lock contention since the
registry is read-only after initialization.
This lets us get rid of the canonicalization map, and also makes
it possible to compare type descriptors with pointer equality.
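For illustration, a minimal sketch of the registry idea (all names
here are hypothetical):

    package sketch

    type typeDescriptor struct{ name string }

    // registry is filled with compiler-created descriptors during
    // startup and is read-only afterwards, so lookups need no lock.
    var registry = map[string]*typeDescriptor{}

    // registerTypes runs at startup for each compilation unit.
    func registerTypes(descs []*typeDescriptor) {
    	for _, d := range descs {
    		registry[d.name] = d
    	}
    }

    // lookupOrCreate returns the canonical compiler-created descriptor
    // when one exists, so pointer equality works for such types.
    func lookupOrCreate(name string) *typeDescriptor {
    	if d, ok := registry[name]; ok {
    		return d
    	}
    	return &typeDescriptor{name: name} // reflection-created type
    }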
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/179598
From-SVN: r271894
When the runtime collects a stack trace to associate it with some
profiling event (mem alloc, mutex, etc) there is a skip count passed
to runtime.Callers (or equivalent) to skip some known count of frames
in order to get to the "interesting" frame corresponding to the
profile event. Now that the profiling mechanism uses lazy fixup (when
removing compiler artifacts like thunks, morestack calls etc), we also
need to move the frame-skipping logic after the fixup, to ensure
that the skip count isn't thrown off by these artifacts.
Fixes golang/go#32290.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/179740
From-SVN: r271892
Revise the gccgo version of memory/block/mutex profiling to reduce
runtime overhead. The main change is to collect raw stack traces while
the profile is being gathered, then post-process the stacks just
before the final product is needed. Memory profiling
(at a very low sampling rate) is enabled by default, and the overhead
of the symbolization / DWARF-reading from backtrace_full was slowing
things down relative to the main Go runtime.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/171497
From-SVN: r271172
runtime.throw needs a g to work properly. Set up g early, to
ensure that if something goes wrong in the runtime startup (e.g.
runtime.check fails), the program terminates in a reasonable way.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/176657
From-SVN: r271088
A direct interface is an interface whose data word contains the
actual data value, instead of a pointer to it. The gc toolchain
creates a direct interface if the value is pointer shaped; this
includes pointers (including unsafe.Pointer), functions, channels,
maps, and structs and arrays containing a single pointer-shaped
field. In gccgo, we only do this for pointers. This CL unifies
direct interface types with gc. This reduces allocations when
converting such types to interfaces.
Our method functions used to always take pointer receivers, to
make interface calls easy. Now, for direct interface types, their
value methods take value receivers. For a pointer to such a type,
when converted to an interface, the interface data word contains
the pointer. For that interface to call a value method, it needs
a wrapper method that dereferences the pointer and invokes the
value method. The wrapper method, instead of the actual one, is
put into the itable of the pointer type.
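For illustration, a sketch of the wrapper idea (written out by hand
here; the compiler generates it automatically):

    package sketch

    // S contains a single pointer field, so it is pointer shaped and
    // its value is now stored directly in an interface's data word.
    type S struct{ p *int }

    // Get is a value method.
    func (s S) Get() *int { return s.p }

    // For a *S stored in an interface, the data word holds the
    // pointer, so the itable of *S carries a compiler-generated
    // wrapper that dereferences it and invokes the value method,
    // conceptually:
    func getWrapper(s *S) *int { return (*s).Get() }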
In the runtime, adjust funcPC for the new layout of interfaces of
functions.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/168409
From-SVN: r270779
Previously, each time we did an interface conversion for which the
method table was not known at compile time, we allocated a new
method table.
This CL ports the mechanism of itab caching from the gc runtime,
adapted to our itab representation and method finding mechanism.
With the cache, we reuse the same itab for the same (interface,
concrete) type pair. This reduces allocations in interface
conversions.
Unlike the gc runtime, we don't prepopulate the cache with
statically allocated itabs, as currently we don't have a way to
find them. This means we don't deduplicate run-time allocated
itabs with compile-time allocated ones. But that is not too bad
-- it is just a cache anyway.
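For illustration, a minimal sketch of such a cache (the key, locking,
and helper names are hypothetical; the real implementation differs in
its hash table details):

    package sketch

    import "sync"

    type itab struct{ methods []uintptr } // simplified method table

    type itabKey struct{ inter, typ string } // the (interface, concrete) pair

    var (
    	itabMu    sync.RWMutex
    	itabCache = map[itabKey]*itab{}
    )

    // getitab reuses a cached itab for the pair, building one on a miss.
    func getitab(inter, typ string, build func() *itab) *itab {
    	key := itabKey{inter, typ}
    	itabMu.RLock()
    	m, ok := itabCache[key]
    	itabMu.RUnlock()
    	if ok {
    		return m
    	}
    	itabMu.Lock()
    	defer itabMu.Unlock()
    	if m, ok := itabCache[key]; ok {
    		return m // another goroutine filled it in meanwhile
    	}
    	m = build()
    	itabCache[key] = m
    	return m
    }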
Since itabs are now never freed, it is also possible to drop the
write barrier for writing the first word of an interface header.
I'll leave this optimization for the future.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/171617
From-SVN: r270778
AIX doesn't allow mmapping an address range that is already mmapped.
Therefore, once the region has been allocated, it must be munmapped
before it can be used again.
The corresponding Go Toolchain patch is CL 174059.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/174138
From-SVN: r270615
In the C calling convention, on AMD64 and probably a number of
other architectures, a 3-word struct argument is passed on the
stack. This is less efficient than passing it in three registers.
Further, this may affect the code generation in other parts of the
program, even if the function is not actually called.
Slices are common in Go and append is a common slice operation,
which calls growslice in the growing path. To improve the code
generation, pass the slice header's three fields as separate
values, instead of a struct, to growslice.
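For illustration, hypothetical before/after shapes of the call
(simplified; the real growslice also takes the element type):

    package sketch

    import "unsafe"

    // sliceHeader mirrors the runtime's 3-word slice representation.
    type sliceHeader struct {
    	array unsafe.Pointer
    	len   int
    	cap   int
    }

    // Before: the header is a struct argument, which the C ABI on
    // AMD64 passes on the stack.
    func growsliceByStruct(old sliceHeader, newcap int) sliceHeader {
    	old.cap = newcap // actual grow logic elided
    	return old
    }

    // After: the three words are separate arguments, so each can
    // travel in its own register.
    func growsliceByFields(array unsafe.Pointer, length, capacity, newcap int) sliceHeader {
    	_ = capacity // actual grow logic elided
    	return sliceHeader{array, length, newcap}
    }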
The drawback is that this makes the runtime implementation diverge
slightly from the gc runtime.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/168277
From-SVN: r269811
Since aix/ppc64 was added to the gc toolchain, a mix of new and
old files was created in the gcc toolchain.
This commit corrects that merge for aix/ppc64 and aix/ppc.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/167658
From-SVN: r269797
In the runtime there are bad pointer checks that currently don't
work with the conservative collector. With stack maps, the GC is
precise and the checks should work. Enable them.
Reviewed-on: https://go-review.googlesource.com/c/gofrontend/+/153871
From-SVN: r269406
Interpreting auxv as []uintptr is incorrect on 64-bit big-endian,
as auxv alternates a 32-bit int with a 64-bit pointer.
Patch from Rainer Orth.
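For illustration, the entry layout implied above, as a Go sketch
(field names hypothetical):

    package sketch

    // auxvEntry models one auxv record on such a system: a 32-bit tag
    // followed, after padding, by a 64-bit value. Reading the pair as
    // two uintptrs misreads the tag on big-endian, where the 32-bit
    // int sits in the high-order bytes of its 64-bit word.
    type auxvEntry struct {
    	tag uint32  // 32-bit type tag
    	_   [4]byte // padding to 8-byte alignment
    	val uintptr // 64-bit value or pointer
    }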
Reviewed-on: https://go-review.googlesource.com/c/164739
From-SVN: r269315
PR go/89172
internal/cpu, runtime, runtime/pprof: handle function descriptors
When using PPC64 ELF ABI v1 a function address is not a PC, but is the
address of a function descriptor. The first field in the function
descriptor is the actual PC (see
http://refspecs.linuxfoundation.org/ELF/ppc64/PPC-elf64abi.html#FUNC-DES).
The libbacktrace library knows about this, and libgo uses actual PC
values consistently except for the helper function funcPC that appears
in both runtime and runtime/pprof.
This patch fixes funcPC by recording, in the internal/cpu package,
whether function descriptors are being used. We have to check for
function descriptors using a C compiler check, because GCC can be
configured using --with-abi to select the ELF ABI to use.
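For illustration, a sketch of the corrected helper (simplified: the
flag is shown as a plain variable, while the real code records it in
internal/cpu, and funcPC lives in runtime and runtime/pprof):

    package sketch

    import "unsafe"

    // functionDescriptors is determined by a C-compiler check at
    // build time (sketched here as a variable set at startup).
    var functionDescriptors bool

    // funcPC converts a function address to an entry PC. Under the
    // PPC64 ELF ABI v1 the address points at a function descriptor
    // whose first field is the actual PC, so one extra indirection
    // is needed.
    func funcPC(fn uintptr) uintptr {
    	if functionDescriptors {
    		return *(*uintptr)(unsafe.Pointer(fn)) // first field = PC
    	}
    	return fn
    }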
Fixes https://gcc.gnu.org/PR89172
Reviewed-on: https://go-review.googlesource.com/c/162978
From-SVN: r269266
In a signal-triggered stack scan, if the signal is delivered at a
bad time (e.g. in the vdso, or in the middle of setcontext?), the
unwinder may not be able to unwind the whole stack, even though it
still reports _URC_END_OF_STACK. So we cannot rely on
_URC_END_OF_STACK to tell whether it successfully scanned the
stack. Instead, we check the last Go frame to see whether it
actually reached the end of the stack. For a Go-created stack,
this is runtime.kickoff. For a C-created stack, we need to record
the outermost Go frame when execution enters the Go side.
Also, we cannot unwind the stack if the signal is delivered in the
middle of runtime.gogo, halfway through a goroutine switch, where
the g and the stack don't match. Give up in this case as well.
Reviewed-on: https://go-review.googlesource.com/c/159098
From-SVN: r269018
Compiling with LTO revealed a number of cases in the runtime and
standard library where C and Go disagreed about the type of an object or
function (or where Go and code generated by the compiler disagreed). In
all cases the underlying representation was the same (e.g., uintptr vs.
void*), so this wasn't causing actual problems, but it did result in a
number of annoying warnings when compiling with LTO.
Reviewed-on: https://go-review.googlesource.com/c/160700
From-SVN: r268923
PR go/89168
libgo: change gotest to run examples with output
Change the gotest script to act like "go test" and run examples that
have "output" comments. This is not done with full generality, but
just enough to run the libgo tests. Other packages should be tested
with "go test" as usual.
While we're here, clean up some old bits of gotest, and only run
TestXXX functions that are actually in *_test.go files. The latter
change should fix https://gcc.gnu.org/PR89168.
Reviewed-on: https://go-review.googlesource.com/c/162139
From-SVN: r268922
GCC has supported the __atomic intrinsics since 4.7. They are better
than the __sync intrinsics in that they specify a memory model and,
more importantly for our purposes, they are reliably implemented
either in the compiler or in libatomic.
Fixes https://gcc.gnu.org/PR52084
Reviewed-on: https://go-review.googlesource.com/c/160820
From-SVN: r268458