Compilers can materialize renamings of arrays (or of accesses to arrays)
in Ada into variables whose types are references to the actual array
types. Before this change, trying to use such an array renaming yielded
an error in GDB:
(gdb) print my_array(1)
cannot subscript or call a record
(gdb) print my_array_ptr(1)
cannot subscript or call something of type `(null)'
This behavior comes from poor handling of array renamings, in particular
in the OP_FUNCALL expression operator handling in ada-lang.c
(ada_evaluate_subexp): in one place we turn the reference into a
pointer, but the code that follows expects the value to be an array.
This patch fixes how we handle references in call/subscript evaluation
so that we turn these references into the actual array values instead of
pointers to them.
gdb/ChangeLog:
* ada-lang.c (ada_evaluate_subexp) <OP_FUNCALL>: When the input
value is a reference, actually dereference it in order to get
the underlying value.
gdb/testsuite/ChangeLog:
* gdb.ada/array_ptr_renaming.exp: New testcase.
* gdb.ada/array_ptr_renaming/foo.adb: New file.
* gdb.ada/array_ptr_renaming/pack.ads: New file.
Tested on x86_64-linux, no regression.
binutils/
PR binutils/15835
* readelf.c (struct elf_section_list): New structure.
(symtab_shndx_hdr): Replace with symtab_shndx_list.
(get_32bit_elf_symbols): Scan for a symbol index table matching
the symbol table in use.
(get_64bit_elf_symbols): Likewise.
(process_section_headers): Handle multiple symbol index sections.
bfd/
* elf-bfd.h (struct elf_section_list): New structure.
(struct elf_obj_tdata): Replace symtab_shndx_hdr with
symtab_shndx_list. Delete symtab_shndx_section.
(elf_symtab_shndx): Replace macro with elf_symtab_shndx_list.
* elf.c (bfd_elf_get_syms): If symtab index sections are present,
scan them for the section that matches the provided symbol table.
(bfd_section_from_shdr): Record all SHT_SYMTAB_SHNDX sections.
(assign_section_numbers): Use the first symtab index table in the
list.
(_bfd_elf_compute_section_file_positions): Replace use of
symtab_shndx_hdr with use of symtab_shndx_list.
(find_section_in_list): New function.
(assign_file_positions_except_relocs): Use new function.
(_bfd_elf_copy_private_symbol_data): Likewise.
(swap_out_syms): Handle multiple symbol table index sections.
* elf32-m32c.c (m32c_elf_relax_section): Replace use of
symtab_shndx_hdr with use of symtab_shndx_list.
* elf32-rl78.c (rl78_elf_relax_section): Likewise.
* elf32-rx.c (rx_relax_section): Likewise.
* elf32-v850.c (v850_elf_relax_delete_bytes): Likewise.
* elflink.c (bfd_elf_final_link): Likewise.
The FT32 simulator has character output, of course. This patch
adds character input, which lets the simulator run interactive
FT32 applications, e.g. language interpreters.
Since the linker now sets the DF_1_PIE bit in the DT_FLAGS_1 tag for PIE,
we need to update MIPS PIE tests for it.
* ld-mips-elf/pie-n32.d: Updated.
* ld-mips-elf/pie-n64.d: Likewise.
* ld-mips-elf/pie-o32.d: Likewise.
ret->args_u.text is const char *, probe_args is const char *, so no cast
is needed. Found while doing cxx-conversion stuff, since it wouldn't
build in C++.
gdb/ChangeLog:
* stap-probe.c (handle_stap_probe): Remove unnecessary cast.
The ch_type field in Elf64_External_Chdr is 4 bytes. We should use
bfd_get_32 and bfd_put_32 to access it.
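As a standalone illustration (not BFD code) of why a 32-bit accessor is
the right size here, this is essentially what bfd_get_32/bfd_put_32 do on
a big-endian field; the real functions also honour the BFD's byte order:
~~~
#include <stdint.h>

/* Read and write a 4-byte field such as ch_type (big-endian shown); a
   64-bit accessor would also consume the 4 bytes that follow it.  */
static uint32_t
get_32_be (const unsigned char *field)
{
  return ((uint32_t) field[0] << 24) | ((uint32_t) field[1] << 16)
         | ((uint32_t) field[2] << 8) | (uint32_t) field[3];
}

static void
put_32_be (uint32_t value, unsigned char *field)
{
  field[0] = value >> 24;
  field[1] = value >> 16;
  field[2] = value >> 8;
  field[3] = value;
}
~~~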
* bfd.c (bfd_update_compression_header): Use bfd_put_32 on
ch_type.
(bfd_check_compression_header): Use bfd_get_32 on ch_type.
(bfd_convert_section_contents): Use bfd_get_32 and bfd_put_32
on ch_type.
Two missing consts, found while doing cxx-conversion work. We end up
with a char*, even though we pass a const char* to strstr. I am pushing
this as obvious.
gdb/ChangeLog:
* cli/cli-setshow.c (cmd_show_list): Constify a variable.
* linespec.c (linespec_lexer_lex_string): Same.
The ch_type field in Elf64_External_Chdr is 4 bytes, followed by a
4-byte padding. This change doesn't introduce any functional change
since only the lower 32 bits of the ch_type field are used.
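For reference, a sketch of the resulting on-disk layout, following the
usual convention of byte-array fields for external ELF structures (the
real definition is in include/elf/external.h; comments are mine):
~~~
typedef struct
{
  unsigned char ch_type[4];      /* Compression algorithm.  */
  unsigned char ch_reserved[4];  /* Padding.  */
  unsigned char ch_size[8];      /* Uncompressed size in bytes.  */
  unsigned char ch_addralign[8]; /* Alignment of the uncompressed data.  */
} Elf64_External_Chdr_sketch;
~~~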
* external.h (Elf64_External_Chdr): Change ch_type to 4 bytes
and add ch_reserved.
When installing a fast tracepoint, we create a jump pad with a
spin-lock. This way, only one thread can collect a given tracepoint at
any time. This test case checks that this lock actually works as
expected.
This test works by creating a function which overrides the in-process
agent library's gdb_collect function. On start up, GDBserver will ask
GDB with the 'qSymbol' packet about symbols present in the inferior.
GDB will reply with the gdb_agent_gdb_collect function from the test
case instead of the one from the agent.
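A rough sketch of that override (the gdb_collect signature and the check
are assumptions for illustration; the real code is in
gdb.trace/ftrace-lock.c):
~~~
/* GDBserver resolves gdb_agent_gdb_collect through the qSymbol exchange,
   so this definition is used instead of the agent's collector.  */
struct tracepoint;

static volatile int threads_collecting;
static volatile int lock_failed;

void
gdb_agent_gdb_collect (struct tracepoint *tpoint, unsigned char *regs)
{
  /* If the jump pad's spin-lock works, at most one thread is ever here.  */
  if (__sync_add_and_fetch (&threads_collecting, 1) != 1)
    lock_failed = 1;
  __sync_sub_and_fetch (&threads_collecting, 1);
}
~~~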
gdb/testsuite/ChangeLog:
* gdb.trace/ftrace-lock.c: New file.
* gdb.trace/ftrace-lock.exp: New file.
This test case makes sure that relocating PC-relative instructions does
not change their behavior. All PC-relative AArch64 instructions are
covered, while on x86 only call and jump (32-bit relative) instructions
are covered.
The test case creates a static array of function pointers for each
supported architecture. Each function in this array tests a specific
instruction using inline assembly. They all need to contain a symbol in
the form of 'set_point\[0-9\]+' and finish by either calling pass or
fail. The N in the 'set_pointN' symbols needs to go from 0 to
(ARRAY_SIZE - 1).
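A trimmed-down sketch of that structure (hypothetical names; the real
functions use inline assembly so that each one contains the instruction
under test and its 'set_pointN' label):
~~~
typedef void (*testcase_ftype) (void);

/* The .exp script sets breakpoints on these to detect the result.  */
static void pass (void) { }
static void fail (void) { }

/* Each of these would exercise one PC-relative instruction via inline
   assembly, with a set_point0, set_point1, ... label where the fast
   tracepoint is placed.  */
static void can_relocate_insn0 (void) { pass (); }
static void can_relocate_insn1 (void) { pass (); }

static testcase_ftype testcases[] = {
  can_relocate_insn0,
  can_relocate_insn1,
};

int
main (void)
{
  unsigned int i;

  /* The .exp script reads the size of this array to know how many
     tracepoints to set.  */
  for (i = 0; i < sizeof (testcases) / sizeof (testcases[0]); i++)
    testcases[i] ();
  return 0;
}
~~~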
The test will:
- look up the number of function pointers in the static array.
- set fast tracepoints on each 'set_point\[0-9\]+' symbol, one in each
function, from 0 to (ARRAY_SIZE - 1).
- run the trace experiment and make sure the pass function is called for
every function.
gdb/testsuite/ChangeLog:
* gdb.arch/insn-reloc.c: New file.
* gdb.arch/ftrace-insn-reloc.exp: New file.
This patch implements compiling agent expressions to native code for
AArch64. This allows us to compile conditions set on fast tracepoints.
The compiled function has the following prologue:
High *--------------------------------*
     | LR                             |
     | FP                             | <- FP
     | x1  (ULONGEST *value)          |
     | x0  (unsigned char *regs)      |
Low  *--------------------------------*
We save the function's arguments on the stack as well as the return
address and the frame pointer. We then set the current frame pointer to
point to the previous one.
The generated code for the expression will freely update the stack
pointer so we use the frame pointer to refer to `*value' and `*regs'.
`*value' needs to be accessed in the epilogue of the function, in order
to set it to whatever is on top of the stack. `*regs' needs to be passed
down to the `gdb_agent_get_raw_reg' function with the `reg' operation.
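In C terms, the generated function behaves roughly like the sketch below
(ULONGEST is spelled out as a 64-bit type here; this only illustrates the
calling convention described above, not actual generated code):
~~~
typedef unsigned long long ULONGEST;

/* On entry x0 holds REGS and x1 holds VALUE; the prologue saves both so
   that the epilogue can still reach VALUE after the expression has been
   evaluated on its stack.  */
static void
compiled_condition_sketch (unsigned char *regs, ULONGEST *value)
{
  ULONGEST top_of_stack = 0;

  /* ... code emitted for the agent expression; REGS is what gets passed
     to gdb_agent_get_raw_reg for the `reg' operation ...  */

  /* Epilogue: hand back whatever is on top of the stack.  */
  *value = top_of_stack;
}
~~~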
gdb/gdbserver/ChangeLog:
* linux-aarch64-low.c: Include ax.h and tracepoint.h.
(enum aarch64_opcodes) <RET>, <SUBS>, <AND>, <ORR>, <ORN>,
<EOR>, <LSLV>, <LSRV>, <ASRV>, <SBFM>, <UBFM>, <CSINC>, <MUL>,
<NOP>: New.
(enum aarch64_condition_codes): New enum.
(w0): New static global.
(fp): Likewise.
(lr): Likewise.
(struct aarch64_memory_operand) <type>: New
MEMORY_OPERAND_POSTINDEX type.
(postindex_memory_operand): New helper function.
(emit_ret): New function.
(emit_load_store_pair): New function, factored out of emit_stp
with support for MEMORY_OPERAND_POSTINDEX.
(emit_stp): Rewrite using emit_load_store_pair.
(emit_ldp): New function.
(emit_load_store): Likewise.
(emit_ldr): Mention post-index instruction in comment.
(emit_ldrh): New function.
(emit_ldrb): New function.
(emit_ldrsw): Mention post-index instruction in comment.
(emit_str): Likewise.
(emit_subs): New function.
(emit_cmp): Likewise.
(emit_and): Likewise.
(emit_orr): Likewise.
(emit_orn): Likewise.
(emit_eor): Likewise.
(emit_mvn): Likewise.
(emit_lslv): Likewise.
(emit_lsrv): Likewise.
(emit_asrv): Likewise.
(emit_mul): Likewise.
(emit_sbfm): Likewise.
(emit_sbfx): Likewise.
(emit_ubfm): Likewise.
(emit_ubfx): Likewise.
(emit_csinc): Likewise.
(emit_cset): Likewise.
(emit_nop): Likewise.
(emit_ops_insns): New helper function.
(emit_pop): Likewise.
(emit_push): Likewise.
(aarch64_emit_prologue): New function.
(aarch64_emit_epilogue): Likewise.
(aarch64_emit_add): Likewise.
(aarch64_emit_sub): Likewise.
(aarch64_emit_mul): Likewise.
(aarch64_emit_lsh): Likewise.
(aarch64_emit_rsh_signed): Likewise.
(aarch64_emit_rsh_unsigned): Likewise.
(aarch64_emit_ext): Likewise.
(aarch64_emit_log_not): Likewise.
(aarch64_emit_bit_and): Likewise.
(aarch64_emit_bit_or): Likewise.
(aarch64_emit_bit_xor): Likewise.
(aarch64_emit_bit_not): Likewise.
(aarch64_emit_equal): Likewise.
(aarch64_emit_less_signed): Likewise.
(aarch64_emit_less_unsigned): Likewise.
(aarch64_emit_ref): Likewise.
(aarch64_emit_if_goto): Likewise.
(aarch64_emit_goto): Likewise.
(aarch64_write_goto_address): Likewise.
(aarch64_emit_const): Likewise.
(aarch64_emit_call): Likewise.
(aarch64_emit_reg): Likewise.
(aarch64_emit_pop): Likewise.
(aarch64_emit_stack_flush): Likewise.
(aarch64_emit_zero_ext): Likewise.
(aarch64_emit_swap): Likewise.
(aarch64_emit_stack_adjust): Likewise.
(aarch64_emit_int_call_1): Likewise.
(aarch64_emit_void_call_2): Likewise.
(aarch64_emit_eq_goto): Likewise.
(aarch64_emit_ne_goto): Likewise.
(aarch64_emit_lt_goto): Likewise.
(aarch64_emit_le_goto): Likewise.
(aarch64_emit_gt_goto): Likewise.
(aarch64_emit_ge_got): Likewise.
(aarch64_emit_ops_impl): New static global variable.
(aarch64_emit_ops): New target function, return
&aarch64_emit_ops_impl.
(struct linux_target_ops): Install it.
This patch adds support for fast tracepoints for aarch64-linux. With this
implementation, a tracepoint can only be placed in a +/- 128MB range of
the jump pad. This is due to the unconditional branch instruction
being limited to a (26 bit << 2) offset from the current PC.
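A small sketch of that range check: the B instruction's signed 26-bit
immediate counts 4-byte words, hence the +/- 128MB limit.
~~~
#include <stdint.h>

/* Return non-zero if an unconditional branch at FROM can reach TO: the
   byte offset must be 4-byte aligned and fit in 26 bits worth of words.  */
static int
b_insn_in_range (uint64_t from, uint64_t to)
{
  int64_t offset = (int64_t) (to - from);

  return (offset & 3) == 0
         && offset >= -(1LL << 27)
         && offset < (1LL << 27);
}
~~~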
Three target operations are implemented:
- target_install_fast_tracepoint_jump_pad
Building the jump pad is the biggest change of this patch. We need to
add functions to emit all the instructions needed to save and restore
the current state when the tracepoint is hit, as well as to implement a
lock and create a collecting_t object identifying the current thread.
Steps performed by the jump pad:
* Save the current state on the stack.
* Push a collecting_t object on the stack. We read the special
tpidr_el0 system register to get the thread ID.
* Spin-lock on the shared memory location of all tracing threads. We
write the address of our collecting_t object there once we have the
lock (see the C sketch below).
* Call gdb_collect.
* Release the lock.
* Restore the state.
* Execute the replaced instruction which will have been relocated.
* Jump back to the program.
- target_get_thread_area
As implemented in ps_get_thread_area, target_get_thread_area uses ptrace
to fetch the NT_ARM_TLS register. At the architecture level, NT_ARM_TLS
represents the tpidr_el0 system register.
So this ptrace call (if lwpid is the current thread):
~~~
ptrace (PTRACE_GETREGSET, lwpid, NT_ARM_TLS, &iovec);
~~~
is equivalent to the following instruction:
~~~
mrs x0, tpidr_el0
~~~
This instruction is used when creating the collecting_t object that
GDBserver can read to know if a given thread is currently tracing.
So target_get_thread_area must get the same thread IDs as what the jump
pad writes into its collecting_t object.
- target_get_min_fast_tracepoint_insn_len
This just returns 4.
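For illustration, the spin-lock step in the jump pad above behaves
roughly like this C sketch (the real jump pad emits ldaxr/stxr pairs,
and the actual collecting_t layout lives in gdbserver/tracepoint.c; the
layout shown is an assumption):
~~~
#include <stdatomic.h>
#include <stdint.h>

/* Assumed layout, for illustration only.  */
struct collecting_t
{
  uintptr_t tpoint;       /* Tracepoint being collected.  */
  uintptr_t thread_area;  /* Thread ID, read from tpidr_el0 on AArch64.  */
};

/* Shared location naming the thread currently collecting.  */
static _Atomic uintptr_t collecting;

static void
jump_pad_collect_sketch (struct collecting_t *self)
{
  uintptr_t expected = 0;

  /* Spin until we own the lock, then publish the address of our
     collecting_t so GDBserver can tell this thread is tracing.  */
  while (!atomic_compare_exchange_weak (&collecting, &expected,
                                        (uintptr_t) self))
    expected = 0;

  /* ... call gdb_collect here ...  */

  /* Release the lock.  */
  atomic_store (&collecting, 0);
}
~~~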
gdb/gdbserver/ChangeLog:
* Makefile.in (linux-aarch64-ipa.o, aarch64-ipa.o): New rules.
* configure.srv (aarch64*-*-linux*): Add linux-aarch64-ipa.o and
aarch64-ipa.o.
* linux-aarch64-ipa.c: New file.
* linux-aarch64-low.c: Include arch/aarch64-insn.h, inttypes.h
and endian.h.
(aarch64_get_thread_area): New target method.
(extract_signed_bitfield): New helper function.
(aarch64_decode_ldr_literal): New function.
(enum aarch64_opcodes): New enum.
(struct aarch64_register): New struct.
(struct aarch64_operand): New struct.
(x0): New static global.
(x1): Likewise.
(x2): Likewise.
(x3): Likewise.
(x4): Likewise.
(w2): Likewise.
(ip0): Likewise.
(sp): Likewise.
(xzr): Likewise.
(aarch64_register): New helper function.
(register_operand): Likewise.
(immediate_operand): Likewise.
(struct aarch64_memory_operand): New struct.
(offset_memory_operand): New helper function.
(preindex_memory_operand): Likewise.
(enum aarch64_system_control_registers): New enum.
(ENCODE): New macro.
(emit_insn): New helper function.
(emit_b): New function.
(emit_bcond): Likewise.
(emit_cb): Likewise.
(emit_tb): Likewise.
(emit_blr): Likewise.
(emit_stp): Likewise.
(emit_ldp_q_offset): Likewise.
(emit_stp_q_offset): Likewise.
(emit_load_store): Likewise.
(emit_ldr): Likewise.
(emit_ldrsw): Likewise.
(emit_str): Likewise.
(emit_ldaxr): Likewise.
(emit_stxr): Likewise.
(emit_stlr): Likewise.
(emit_data_processing_reg): Likewise.
(emit_data_processing): Likewise.
(emit_add): Likewise.
(emit_sub): Likewise.
(emit_mov): Likewise.
(emit_movk): Likewise.
(emit_mov_addr): Likewise.
(emit_mrs): Likewise.
(emit_msr): Likewise.
(emit_sevl): Likewise.
(emit_wfe): Likewise.
(append_insns): Likewise.
(can_encode_int32_in): New helper function.
(aarch64_relocate_instruction): New function.
(aarch64_install_fast_tracepoint_jump_pad): Likewise.
(aarch64_get_min_fast_tracepoint_insn_len): Likewise.
(struct linux_target_ops): Install aarch64_get_thread_area,
aarch64_install_fast_tracepoint_jump_pad and
aarch64_get_min_fast_tracepoint_insn_len.
We will need to decode both ADR and ADRP instructions in GDBserver.
This patch makes common code handle both cases, even if GDB only needs
to decode the ADRP instruction.
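As a sketch of the shared decoding (field layout per the AArch64 ISA; an
illustration rather than the actual aarch64_decode_adr):
~~~
#include <stdint.h>

/* ADR/ADRP encoding: op(1) immlo(2) 1 0 0 0 0 immhi(19) Rd(5).
   Return 1 and fill the outputs if INSN is one of them.  */
static int
decode_adr_sketch (uint32_t insn, int *is_adrp, unsigned *rd,
                   int64_t *offset)
{
  uint32_t imm;
  int64_t simm;

  if ((insn & 0x1f000000u) != 0x10000000u)
    return 0;

  *is_adrp = (insn >> 31) & 1;
  *rd = insn & 0x1f;

  /* The 21-bit immediate is immhi:immlo, sign-extended.  */
  imm = (((insn >> 5) & 0x7ffffu) << 2) | ((insn >> 29) & 3u);
  simm = (imm & 0x100000u) ? (int64_t) imm - 0x200000 : (int64_t) imm;

  /* ADR encodes a byte offset; ADRP encodes a 4KiB-page offset.  */
  *offset = *is_adrp ? simm * 4096 : simm;
  return 1;
}
~~~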
gdb/ChangeLog:
* aarch64-tdep.c (aarch64_analyze_prologue): New is_adrp
variable. Call aarch64_decode_adr instead of
aarch64_decode_adrp.
* arch/aarch64-insn.h (aarch64_decode_adrp): Delete.
(aarch64_decode_adr): New function declaration.
* arch/aarch64-insn.c (aarch64_decode_adrp): Delete.
(aarch64_decode_adr): New function, factored out from
aarch64_decode_adrp to decode both adr and adrp instructions.
This patch moves the following functions into the arch/ common
directory, in new files arch/aarch64-insn.{h,c}. They are prefixed with
'aarch64_':
- aarch64_decode_adrp
- aarch64_decode_b
- aarch64_decode_cb
- aarch64_decode_tb
We will need them to implement fast tracepoints in GDBserver.
For consistency, this patch also adds the 'aarch64_' prefix to static
decoding functions that do not need to be shared right now.
V2:
- make sure the formatting fixes propagated
- fix `gdbserver/configure.srv'.
gdb/ChangeLog:
* Makefile.in (ALL_64_TARGET_OBS): Add aarch64-insn.o.
(HFILES_NO_SRCDIR): Add arch/aarch64-insn.h.
(aarch64-insn.o): New rule.
* configure.tgt (aarch64*-*-elf): Add aarch64-insn.o.
(aarch64*-*-linux*): Likewise.
* arch/aarch64-insn.c: New file.
* arch/aarch64-insn.h: New file.
* aarch64-tdep.c: Include arch/aarch64-insn.h.
(aarch64_debug): Move to arch/aarch64-insn.c. Declare in
arch/aarch64-insn.h.
(decode_add_sub_imm): Rename to ...
(aarch64_decode_add_sub_imm): ... this.
(decode_adrp): Rename to ...
(aarch64_decode_adrp): ... this. Move to arch/aarch64-insn.c.
Declare in arch/aarch64-insn.h.
(decode_b): Rename to ...
(aarch64_decode_b): ... this. Move to arch/aarch64-insn.c.
Declare in arch/aarch64-insn.h.
(decode_bcond): Rename to ...
(aarch64_decode_bcond): ... this. Move to arch/aarch64-insn.c.
Declare in arch/aarch64-insn.h.
(decode_br): Rename to ...
(aarch64_decode_br): ... this.
(decode_cb): Rename to ...
(aarch64_decode_cb): ... this. Move to arch/aarch64-insn.c.
Declare in arch/aarch64-insn.h.
(decode_eret): Rename to ...
(aarch64_decode_eret): ... this.
(decode_movz): Rename to ...
(aarch64_decode_movz): ... this.
(decode_orr_shifted_register_x): Rename to ...
(aarch64_decode_orr_shifted_register_x): ... this.
(decode_ret): Rename to ...
(aarch64_decode_ret): ... this.
(decode_stp_offset): Rename to ...
(aarch64_decode_stp_offset): ... this.
(decode_stp_offset_wb): Rename to ...
(aarch64_decode_stp_offset_wb): ... this.
(decode_stur): Rename to ...
(aarch64_decode_stur): ... this.
(decode_tb): Rename to ...
(aarch64_decode_tb): ... this. Move to arch/aarch64-insn.c.
Declare in arch/aarch64-insn.h.
(aarch64_analyze_prologue): Adjust calls to renamed functions.
gdb/gdbserver/ChangeLog:
* Makefile.in (aarch64-insn.o): New rule.
* configure.srv (aarch64*-*-linux*): Add aarch64-insn.o.
Hi,
I see the following build warning with recent GCC built from mainline,
aarch64-none-linux-gnu-gcc -g -O2 -I. -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/../common -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/../regformats -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/.. -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/../../include -I/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/../gnulib/import -Ibuild-gnulib-gdbserver/import -Wall -Wpointer-arith -Wformat-nonliteral -Wno-char-subscripts -Wempty-body -Wdeclaration-after-statement -Werror -DGDBSERVER -DCONFIG_UST_GDB_INTEGRATION -fPIC -DIN_PROCESS_AGENT -fvisibility=hidden -c -o ax-ipa.o -MT ax-ipa.o -MMD -MP -MF .deps/ax-ipa.Tpo `echo " -Wall -Wpointer-arith -Wformat-nonliteral -Wno-char-subscripts -Wempty-body -Wdeclaration-after-statement " | sed "s/ -Wformat-nonliteral / -Wno-format-nonliteral /g"` /home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/ax.c
/home/yao/SourceCode/gnu/gdb/git/gdb/gdbserver/ax.c:73:28: error: 'gdb_agent_op_sizes' defined but not used [-Werror=unused-const-variable]
static const unsigned char gdb_agent_op_sizes [gdb_agent_op_last] =
^
cc1: all warnings being treated as errors
gdb_agent_op_sizes is only used in the function is_goto_target, which
is defined inside #ifndef IN_PROCESS_AGENT. This warning is not arch
specific, so GCC mainline should produce it for other targets too,
although here it is triggered by enabling aarch64 fast tracepoints. The
fix is to move the definition of gdb_agent_op_sizes inside the
#ifndef IN_PROCESS_AGENT block.
gdb/gdbserver:
2015-09-21 Yao Qi <yao.qi@linaro.org>
* ax.c [!IN_PROCESS_AGENT] (gdb_agent_op_sizes): Define it.
This patch removes max_jump_pad_size, which isn't used anywhere else
and causes a recent GCC warning like this:
gdb/gdbserver/tracepoint.c:2920:18: error: 'max_jump_pad_size' defined but not used [-Werror=unused-const-variable]
static const int max_jump_pad_size = 0x100;
^
cc1: all warnings being treated as errors
The variable max_jump_pad_size has not been used since it was added in
2010 by https://sourceware.org/ml/gdb-patches/2010-06/msg00002.html
gdb/gdbserver:
2015-09-21 Yao Qi <yao.qi@linaro.org>
* tracepoint.c (max_jump_pad_size): Remove.
We have noticed that GDB would sometimes crash when trying to print,
from a nested function, the value of a variable declared in an
enclosing scope. This appears to be target dependent, although
that correlation might only be fortuitous. We noticed the issue
on x86_64-darwin, x86-vxworks6 and x86-solaris. The investigation
was done on Darwin.
This is a new feature that was introduced by:
commit 63e43d3aed
Date: Thu Feb 5 17:00:06 2015 +0100
DWARF: handle non-local references in nested functions
We can reproduce the problem with one of the testcases that was
added with the patch (gdb.base/nested-subp1.exp), where we have...
    18    int
    19    foo (int i1)
    20    {
    21      int
    22      nested (int i2)
    23      {
    [...]
    27        return i1 * i2; /* STOP */
    28      }
... After building the example program, and running until line 27,
try printing the value of "i1":
% gdb gdb.base/nested-subp1
(gdb) break foo.c:27
(gdb) run
Breakpoint 1, nested (i2=2) at /[...]/nested-subp1.c:27
27 return i1 * i2; /* STOP */
(gdb) p i1
[1] 73090 segmentation fault ../gdb -q gdb.base/nested-subp1
Ooops!
What happens is that, because the reference is non-local, we are trying
to follow the function's static link, which does...
  /* If we don't know how to compute FRAME's base address, don't give up:
     maybe the frame we are looking for is upper in the stace frame. */
  if (framefunc != NULL
      && SYMBOL_BLOCK_OPS (framefunc)->get_frame_base != NULL
      && (SYMBOL_BLOCK_OPS (framefunc)->get_frame_base (framefunc, frame)
          == upper_frame_base))
... or, in other words, calls the get_frame_base "method" of
framefunc's struct symbol_block_ops data. This resolves to
the block_op_get_frame_base function.
Looking at the function's implementation, we see:
  struct dwarf2_locexpr_baton *dlbaton;
  [...]
  dlbaton = SYMBOL_LOCATION_BATON (framefunc);
  [...]
  result = dwarf2_evaluate_loc_desc (type, frame, start, length,
                                     dlbaton->per_cu);
                                     ^^^^^^^^^^^^^^^
Printing dlbaton->per_cu gives a value that seems fairly bogus for
a memory address (0x60). Because of it, dwarf2_evaluate_loc_desc
then crashes trying to dereference it.
What's different on Darwin compared to Linux is that the function's
frame base is encoded using the following form:
        .byte   0x40    # uleb128 0x40; (DW_AT_frame_base)
        .byte   0x6     # uleb128 0x6; (DW_FORM_data4)
... and so dwarf2_symbol_mark_computed ends up creating
a SYMBOL_LOCATION_BATON as a struct dwarf2_loclist_baton:
  if (attr_form_is_section_offset (attr)
      /* .debug_loc{,.dwo} may not exist at all, or the offset may be outside
         the section.  If so, fall through to the complaint in the
         other branch.  */
      && DW_UNSND (attr) < dwarf2_section_size (objfile, section))
    {
      struct dwarf2_loclist_baton *baton;
      [...]
      SYMBOL_LOCATION_BATON (sym) = baton;
However, if you look more closely at block_op_get_frame_base's
implementation, you'll notice that the function extracts the
symbol's SYMBOL_LOCATION_BATON as a dwarf2_locexpr_baton
(a DWARF _expression_ rather than a _location list_).
That's why we end up decoding the DLBATON improperly, and thus
pass a random dlbaton->per_cu when calling dwarf2_evaluate_loc_desc.
This works on x86_64-linux, because we indeed have the frame base
described using a different form:
        .uleb128 0x40   # (DW_AT_frame_base)
        .uleb128 0x18   # (DW_FORM_exprloc)
This patch fixes the issue by doing what we do for most (if not all)
other such methods: providing one implementation each for loc-list and
loc-expr. Both implementations are nearly identical, so perhaps we
might later want to improve this. But this patch focuses on fixing the
crash first, leaving the design issue for later.
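The shape of the fix, as a toy sketch (not GDB code): each baton kind
gets a get_frame_base implementation that knows its own layout, selected
through the matching ops structure:
~~~
/* Toy illustration only: one callback per baton layout, so a location
   list baton is never read as if it were a location expression.  */
struct locexpr_baton { const unsigned char *data; void *per_cu; };
struct loclist_baton { const unsigned char *data; void *per_cu; };

struct block_ops { void (*get_frame_base) (void *baton); };

static void
locexpr_get_frame_base_sketch (void *baton)
{
  struct locexpr_baton *b = baton;  /* Only used for locexpr symbols.  */
  (void) b;
}

static void
loclist_get_frame_base_sketch (void *baton)
{
  struct loclist_baton *b = baton;  /* Only used for loclist symbols.  */
  (void) b;
}

static const struct block_ops locexpr_funcs = { locexpr_get_frame_base_sketch };
static const struct block_ops loclist_funcs = { loclist_get_frame_base_sketch };
~~~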
gdb/ChangeLog:
* dwarf2loc.c (locexpr_get_frame_base): Renames
block_op_get_frame_base.
(dwarf2_block_frame_base_locexpr_funcs): Replace reference to
block_op_get_frame_base by reference to locexpr_get_frame_base.
(loclist_get_frame_base): New function, near identical copy of
locexpr_get_frame_base.
(dwarf2_block_frame_base_loclist_funcs): Replace reference to
block_op_get_frame_base by reference to loclist_get_frame_base.
Tested on x86_64-darwin (AdaCore testsuite), and x86_64-linux
(official testsuite).
bfd/ChangeLog:
* targets.c (enum bfd_flavour): Add comment.
(bfd_flavour_name): New function.
* bfd-in2.h: Regenerate.
gdb/ChangeLog:
* findvar.c (default_read_var_value) <LOC_UNRESOLVED>: Include the
kind of minimal symbol in the error message.
* objfiles.c (objfile_flavour_name): New function.
* objfiles.h (objfile_flavour_name): Declare.
gdb/testsuite/ChangeLog:
* gdb.dwarf2/dw2-bad-unresolved.c: New file.
* gdb.dwarf2/dw2-bad-unresolved.exp: New file.
2015-09-18 Sandra Loosemore <sandra@codesourcery.com>
gdb/testsuite/
* gdb.mi/mi-pending.exp: Don't use directory prefix when setting
the pending breakpoint. Remove timeout override for "Run till
MI pending breakpoint on pendfunc3 on thread 2" test.
With the kernel fix <http://lists.infradead.org/pipermail/linux-arm-kernel/2015-July/356511.html>,
aarch64 GDB is able to read the base of thread area of 32-bit arm
program through NT_ARM_TLS.
This patch is to teach both GDB and GDBserver to read the base of
thread area correctly in the multi-arch case. A new function
aarch64_ps_get_thread_area is added, and is shared between GDB and
GDBserver.
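A hedged sketch of what the shared helper does (not the actual
nat/aarch64-linux.c code; register sizes follow the inferior's word size,
and error handling is trimmed):
~~~
#include <elf.h>        /* NT_ARM_TLS */
#include <stdint.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>

/* Read the TLS base of LWPID via NT_ARM_TLS.  A 32-bit arm inferior gets
   a 4-byte value, an aarch64 inferior an 8-byte one.  */
static int
ps_get_thread_area_sketch (pid_t lwpid, int is_64bit, void **base)
{
  uint64_t reg64 = 0;
  uint32_t reg32 = 0;
  struct iovec iovec;

  if (is_64bit)
    {
      iovec.iov_base = &reg64;
      iovec.iov_len = sizeof (reg64);
    }
  else
    {
      iovec.iov_base = &reg32;
      iovec.iov_len = sizeof (reg32);
    }

  if (ptrace (PTRACE_GETREGSET, lwpid, NT_ARM_TLS, &iovec) != 0)
    return -1;

  *base = (void *) (uintptr_t) (is_64bit ? reg64 : (uint64_t) reg32);
  return 0;
}
~~~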
With this patch applied, the following failures in multi-arch testing
(GDB is aarch64 but the test cases are arm) are fixed:
-FAIL: gdb.threads/tls-nodebug.exp: thread local storage
-FAIL: gdb.threads/tls-shared.exp: print thread local storage variable
-FAIL: gdb.threads/tls-so_extern.exp: print thread local storage variable
-FAIL: gdb.threads/tls-var.exp: print tls_var
-FAIL: gdb.threads/tls.exp: first thread local storage
-FAIL: gdb.threads/tls.exp: first another thread local storage
-FAIL: gdb.threads/tls.exp: p a_thread_local
-FAIL: gdb.threads/tls.exp: p file2_thread_local
-FAIL: gdb.threads/tls.exp: p a_thread_local second time
gdb:
2015-09-18 Yao Qi <yao.qi@linaro.org>
* nat/aarch64-linux.c: Include elf/common.h,
nat/gdb_ptrace.h, asm/ptrace.h and sys/uio.h.
(aarch64_ps_get_thread_area): New function.
* nat/aarch64-linux.h: Include gdb_proc_service.h.
(aarch64_ps_get_thread_area): Declare.
* aarch64-linux-nat.c (ps_get_thread_area): Call
aarch64_ps_get_thread_area.
gdb/gdbserver:
2015-09-18 Yao Qi <yao.qi@linaro.org>
* linux-aarch64-low.c: Don't include sys/uio.h.
(ps_get_thread_area): Call aarch64_ps_get_thread_area.
In all-stop mode, record btrace maintains the old behaviour of an implicit
scheduler-locking on.
Now that we added a scheduler-locking mode to model this old behaviour, we
don't need the respective code in record btrace anymore. Remove it.
For all-stop targets, step inferior_ptid and continue other threads matching
the argument ptid. Assert that inferior_ptid matches the argument ptid.
This should make record btrace honour scheduler-locking.
gdb/
* record-btrace.c (record_btrace_resume): Honour scheduler-locking.
testsuite/
* gdb.btrace/multi-thread-step.exp: Test scheduler-locking on, step,
and replay.
Record targets behave as if scheduler-locking were on in replay mode. Add a
new scheduler-locking option "replay" to make this implicit behaviour explicit.
It behaves like "on" in replay mode and like "off" in record mode.
By making the current behaviour a scheduler-locking option, we allow the user
to change it. Since it is the current behaviour, this new option is also
the new default.
One caveat is that when resuming a thread that is at the end of its execution
history, record btrace implicitly stops replaying other threads and resumes
the entire process. This is a convenience feature to not require the user
to explicitly move all other threads to the end of their execution histories
before being able to resume the process.
We mimic this behaviour with scheduler-locking replay and move it from
record-btrace into infrun. With all-stop on top of non-stop, we can't do
this in record-btrace anymore.
Record full does not really support multi-threading and is therefore not
impacted. If it were extended to support multi-threading, it would 'benefit'
from this change. The good thing is that all record targets will behave the
same with respect to scheduler-locking.
I put the code for this into clear_proceed_status. It also sends the
about_to_proceed notification.
gdb/
* NEWS: Announce new scheduler-locking mode.
* infrun.c (schedlock_replay): New.
(scheduler_enums): Add schedlock_replay.
(scheduler_mode): Change default to schedlock_replay.
(user_visible_resume_ptid): Handle schedlock_replay.
(clear_proceed_status_thread): Stop replaying if resumed thread is
not replaying.
(schedlock_applies): Handle schedlock_replay.
(_initialize_infrun): Document new scheduler-locking mode.
* record-btrace.c (record_btrace_resume): Remove code to stop other
threads when not replaying the resumed thread.
doc/
* gdb.texinfo (All-Stop Mode): Describe new scheduler-locking mode.
Add a new target method to_record_will_replay to query if there is a record
target that will replay at least one thread matching the argument PTID if it
were executed in the argument execution direction.
gdb/
* record-btrace.c (record_btrace_will_replay): New.
(init_record_btrace_ops): Initialize to_record_will_replay.
* record-full.c (record_full_will_replay): New.
(init_record_full_ops): Initialize to_record_will_replay.
* target-delegates.c: Regenerated.
* target.c (target_record_will_replay): New.
* target.h (struct target_ops) <to_record_will_replay>: New.
(target_record_will_replay): New.
Signed-off-by: Markus Metzger <markus.t.metzger@intel.com>