Function set_register_cache was removed by 3aee891821
([GDBserver] Multi-process + multi-arch), so this patch removes the
declaration too.
gdb:
2017-06-06 Yao Qi <yao.qi@linaro.org>
* regformats/regdef.h (set_register_cache): Remove the
declaration.
The problem is that b->extra_string is freed twice: once in the
breakpoint's dtor, and again via make_cleanup (xfree).
This patch gets rid of the cleanups, fixing the problem.
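For illustration only, here is a minimal, self-contained sketch of the
ownership pattern involved, using std::unique_ptr with a free()-style
deleter as a stand-in for gdb::unique_xmalloc_ptr; the struct and field
below are simplified assumptions, not the actual GDB code:

#include <cstdlib>
#include <cstring>
#include <memory>

static void
xfree_sketch (void *p)
{
  std::free (p);
}

/* Stand-in for gdb::unique_xmalloc_ptr<char>: releases malloc'ed
   memory exactly once.  */
using unique_xmalloc_str = std::unique_ptr<char, void (*) (void *)>;

struct breakpoint_sketch
{
  /* Owned by the breakpoint; freed once, in its destructor.  */
  char *extra_string = nullptr;

  ~breakpoint_sketch () { std::free (extra_string); }
};

int
main ()
{
  /* Before the fix: a make_cleanup (xfree, ...) cleanup and the dtor
     could both free the same buffer.  With a unique pointer there is a
     single owner, and release () hands ownership over explicitly.  */
  char *buf = static_cast<char *> (std::malloc (16));
  std::strcpy (buf, "thread 1");
  unique_xmalloc_str extra (buf, xfree_sketch);

  breakpoint_sketch b;
  b.extra_string = extra.release ();
  return 0;
}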
Tested on x86_64 GNU/Linux.
gdb/ChangeLog:
2017-06-06 Pedro Alves <palves@redhat.com>
PR breakpoints/21553
* breakpoint.c (create_breakpoints_sal_default)
(init_breakpoint_sal, create_breakpoint_sal): Use
gdb::unique_xmalloc_ptr for string parameters.
(create_breakpoint): Constify 'extra_string' and 'cond_string'
parameters. Replace cleanups with gdb::unique_xmalloc_ptr.
(base_breakpoint_create_breakpoints_sal)
(bkpt_create_breakpoints_sal, tracepoint_create_breakpoints_sal)
(strace_marker_create_breakpoints_sal)
(create_breakpoints_sal_default): Use gdb::unique_xmalloc_ptr for
string parameters.
* breakpoint.h (breakpoint_ops::create_breakpoints_sal): Use
gdb::unique_xmalloc_ptr for string parameters.
(create_breakpoint): Constify 'extra_string' and 'cond_string'
parameters.
In Thumb mode, since ARMv8-A, REG_SP is allowed in most places as
Rd/Rt/Rt2 etc., whereas before ARMv8-A it was disallowed and rejected
through the "reject_bad_reg" macro and several scattered checks.
This patch makes "reject_bad_reg" and the related places reject REG_SP
only for legacy architectures before ARMv8-A. I have checked the
affected instructions; all of them qualify for this relaxation.
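As a rough, hypothetical sketch of the kind of gate this introduces (the
feature test and error strings below are illustrative; the real logic
lives in the reject_bad_reg macro and related checks in
gas/config/tc-arm.c):

#include <cstdio>

enum { REG_SP = 13, REG_PC = 15 };

/* Illustrative stand-in for "the selected architecture is ARMv8-A or
   later"; tc-arm.c would consult the selected CPU feature set.  */
static bool arch_is_v8a_or_later = false;

/* Sketch of the relaxed check: PC is still always rejected in these
   operand positions, SP only before ARMv8-A.  */
static const char *
check_bad_reg (int reg)
{
  if (reg == REG_PC)
    return "r15 not allowed here";
  if (reg == REG_SP && !arch_is_v8a_or_later)
    return "r13 not allowed here";
  return nullptr;
}

int
main ()
{
  arch_is_v8a_or_later = true;
  const char *err = check_bad_reg (REG_SP);
  std::printf ("SP on ARMv8-A: %s\n", err ? err : "accepted");
  return 0;
}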
Testcases adjusted accordingly.
* ld-sp-warn.d was written without ".arch" and without any -march option
passed. By default it assumes all architectures, so I deleted the REG_SP
warning on ldrsb as it is supported on ARMv8-A. There are quite a few
separate tests for other architectures, for example ld-sp-warn-v7.l etc.,
so ldrsb on legacy architectures is still covered there.
* sp-pc-validations-bad-t has been extended to armv8-a.
* strex-bad-t.d is restricted to armv7-a.
* Some new tests for REG_SP used as Rd/Rt etc. are added in
sp-usage-thumb2-relax*.
gas/
* config/tc-arm.c (reject_bad_reg): Allow REG_SP on ARMv8-A.
(parse_operands): Allow REG_SP for OP_oRRnpcsp and OP_RRnpcsp on
ARMv8-A.
(do_co_reg): Allow REG_SP for Rd on ARMv8-A.
(do_t_add_sub): Likewise.
(do_t_mov_cmp): Likewise.
(do_t_tb): Likewise.
* testsuite/gas/arm/ld-sp-warn.l: Delete the warning on REG_SP as Rt for
ldrsb.
* testsuite/gas/arm/sp-pc-validations-bad-t-v8a.d: New test.
* testsuite/gas/arm/sp-pc-validations-bad-t-v8a.l: New test.
* testsuite/gas/arm/sp-pc-validations-bad-t.d: Specify -march=armv7-a.
* testsuite/gas/arm/sp-pc-validations-bad-t.s: Remove ".arch armv7-a".
* testsuite/gas/arm/sp-usage-thumb2-relax-on-v7.d: New test.
* testsuite/gas/arm/sp-usage-thumb2-relax-on-v7.l: New test.
* testsuite/gas/arm/sp-usage-thumb2-relax-on-v8.d: New test.
* testsuite/gas/arm/sp-usage-thumb2-relax.s: New test.
* testsuite/gas/arm/strex-bad-t.d: Specify -march=armv7-a.
This commit adds a new linker feature: the ability to resolve section
groups as part of a relocatable link.
Currently section groups are automatically resolved when performing a
final link, and are carried through when performing a relocatable link.
"Carried through" here means that one copy of each section group (from
all the copies that might be found in all the input files) is placed
into the output file. Sections that are part of a section group will
not match input section specifiers within a linker script and are
forcibly kept as separate sections.
There is a slight resemblance between section groups and common
sections. Like section groups, common sections are carried through when
performing a relocatable link, and resolved (allocated actual space)
only at final link time.
However, with common sections there is a way to force the linker to
allocate space for them when performing a relocatable link; there is
currently no such ability for section groups.
This commit adds such a mechanism. The new facility can be accessed in
two ways: first, through a command line switch, --force-group-allocation,
and second, through a new linker script command, FORCE_GROUP_ALLOCATION.
If one of these is used when performing a relocatable link then the
linker will resolve the section groups as though it were performing a
final link, the section group will be deleted, and the members of the
group will be placed like normal input sections. If there are multiple
copies of the group (from multiple input files) then only one copy of
the group members will be placed; the duplicate copies will be
discarded.
Unlike common sections, which have the --no-define-common command line
flag and the INHIBIT_COMMON_ALLOCATION linker script command, there is
no way to prevent group resolution during a final link. This is because
the ELF gABI specifically prohibits the presence of SHT_GROUP sections
in a fully linked executable. However, the code as written should make
adding such a feature trivial: setting the new resolve_section_groups
flag to false during a final link should work as you'd expect.
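To make the behaviour concrete, here is a small hypothetical example
(the file names, the command line and the use of an inline function are
assumptions for illustration). An inline function defined in a header
typically ends up in a COMDAT section group in every object that uses
it:

// common.h (illustrative): every translation unit including this
// header gets a COMDAT section group holding the out-of-line copy of
// answer ().
inline int
answer ()
{
  return 42;
}

// a.cc and b.cc each do:
//   #include "common.h"
// and call answer (), so a.o and b.o each carry a copy of the group.

With a plain relocatable link (ld -r a.o b.o -o ab.o) the group is
carried through into ab.o as a group, to be resolved at final link
time. With --force-group-allocation (or a script containing
FORCE_GROUP_ALLOCATION) the group is resolved at the relocatable link
itself: the SHT_GROUP section is deleted, its members are placed like
normal input sections, and duplicate copies of the group members are
discarded.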
bfd/ChangeLog:
* elf.c (_bfd_elf_make_section_from_shdr): Don't initially mark
SEC_GROUP sections as SEC_EXCLUDE.
(bfd_elf_set_group_contents): Replace use of abort with an assert.
(assign_section_numbers): Use resolve_section_groups flag instead
of relocatable link type.
(_bfd_elf_init_private_section_data): Use resolve_section_groups
flag instead of checking the final_link flag for part of the
checks in here. Fix white space as a result.
* elflink.c (elf_link_input_bfd): Use resolve_section_groups flag
instead of relocatable link type.
(bfd_elf_final_link): Likewise.
include/ChangeLog:
* bfdlink.h (struct bfd_link_info): Add new resolve_section_groups
flag.
ld/ChangeLog:
* ld.h (struct args_type): Add force_group_allocation field.
* ldgram.y: Add support for FORCE_GROUP_ALLOCATION.
* ldlex.h: Likewise.
* ldlex.l: Likewise.
* lexsup.c: Likewise.
* ldlang.c (unique_section_p): Check resolve_section_groups flag
not the relocatable link flag.
(lang_add_section): Discard section groups when we're resolving
groups. Clear the SEC_LINK_ONCE flag if we're resolving section
groups.
* ldmain.c (main): Initialise resolve_section_groups flag in
link_info based on command line flags.
* testsuite/ld-elf/group11.d: New file.
* testsuite/ld-elf/group12.d: New file.
* testsuite/ld-elf/group12.ld: New file.
* NEWS: Mention new features.
* ld.texinfo (Options): Document --force-group-allocation.
(Miscellaneous Commands): Document FORCE_GROUP_ALLOCATION.
Correct a regression from commit e5713223cb ("MIPS/BFD: For n64 hold the
number of internal relocs in `->reloc_count'") and change internal
relocation handling in the generic ELF BFD linker code such that, except
in the presence of R_SPARC_OLO10 relocations, a section's `reloc_count'
holds the number of internal rather than external relocations. This
makes the handling more consistent between GAS, which sets
`->reloc_count' with a call to `bfd_set_reloc', and LD, which sets
`->reloc_count' as it reads input sections.
The handling of dynamic relocations remains unchanged and they continue
holding the number of external relocations in `->reloc_count'; they are
also not converted to the internal form except in `elf_link_sort_relocs'
(which does not handle the general, i.e. non-n64-MIPS case of composed
relocations correctly as per the ELF gABI, though it does not seem to
matter for the targets we currently support).
The n64 MIPS backend is the only one with `int_rels_per_ext_rel' set to
non-one, and consequently the change is trivial for all the remaining
backends and targets.
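In other words, with `int_rels_per_ext_rel' equal to N (3 for n64 MIPS,
1 for every other backend), the convention after this change is roughly
as in the illustrative sketch below (not actual BFD code):

/* Sketch: what a section's reloc_count now holds for ordinary
   (non-dynamic) sections.  Consumers no longer multiply by
   int_rels_per_ext_rel when walking the internal relocations.  */
static unsigned int
internal_reloc_count (unsigned int external_relocs,
                      unsigned int int_rels_per_ext_rel)
{
  return external_relocs * int_rels_per_ext_rel;
}

int
main ()
{
  /* n64 MIPS: one external RELA expands to three internal relocs.  */
  return internal_reloc_count (10, 3) == 30 ? 0 : 1;
}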
bfd/
* elf-bfd.h (RELOC_AGAINST_DISCARDED_SECTION): Subtract `count'
from `reloc_count' rather than decrementing it.
* elf.c (bfd_section_from_shdr): Multiply the adjustment to
`reloc_count' by `int_rels_per_ext_rel'.
* elf32-score.c (score_elf_final_link_relocate): Do not multiply
`reloc_count' by `int_rels_per_ext_rel' for last relocation
entry determination.
(s3_bfd_score_elf_check_relocs): Likewise.
* elf32-score7.c (score_elf_final_link_relocate): Likewise.
(s7_bfd_score_elf_relocate_section): Likewise.
(s7_bfd_score_elf_check_relocs): Likewise.
* elf64-mips.c (mips_elf64_get_reloc_upper_bound): Remove
prototype and function.
(mips_elf64_slurp_one_reloc_table): Do not update `reloc_count'.
(mips_elf64_slurp_reloc_table): Assert that `reloc_count' is
three times, rather than equal to, the sum of the REL and RELA
relocation entry counts.
(bfd_elf64_get_reloc_upper_bound): Remove macro.
* elflink.c (_bfd_elf_link_read_relocs): Do not multiply
`reloc_count' by `int_rels_per_ext_rel' for internal relocation
storage allocation size determination.
(elf_link_input_bfd): Multiply `.ctors' and `.dtors' section's
size by `int_rels_per_ext_rel'. Do not multiply `reloc_count'
by `int_rels_per_ext_rel' for last relocation entry
determination.
(bfd_elf_final_link): Do not multiply `reloc_count' by
`int_rels_per_ext_rel' for internal relocation storage
allocation size determination.
(init_reloc_cookie_rels): Do not multiply `reloc_count' by
`int_rels_per_ext_rel' for last relocation entry determination.
(elf_gc_smash_unused_vtentry_relocs): Likewise.
* elfxx-mips.c (_bfd_mips_elf_check_relocs): Likewise.
(_bfd_mips_elf_relocate_section): Likewise.
This configure option makes ld.bfd behave as if --enable-new-dtags were
passed by default.
* configure.ac: Add --enable-new-dtags option.
* ldmain.c: Set link_info.new_dtags to 1 when --enable-new-dtags is
switched on.
* configure: Regenerate.
* config.in: Regenerate.
The parameter "first" of linux_nat_post_attach_wait is unused, remove
it.
gdb/ChangeLog:
* linux-nat.c (linux_nat_post_attach_wait): Remove FIRST
parameter.
(linux_nat_attach): Adjust call to linux_nat_post_attach_wait.
Since it is incorrect to convert
bnd call *foo@GOTPCREL(%rip)
to
bnd nop
call foo
this patch removes the "-z prefix-nop" option from the x86 linker.
* emulparams/call_nop.sh: Remove -z prefix-nop.
* ld.texinfo: Likewise.
* testsuite/ld-i386/call3c.d: Check for linker error.
* testsuite/ld-x86-64/call1c.d: Likewise.
gdb_timer objects are new'ed in create_timer, but xfree'd in
poll_timers. Use delete instead.
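For illustration, a minimal sketch of the mismatch being fixed (using a
simplified, hypothetical type):

struct gdb_timer_sketch
{
  int id = 0;
};

int
main ()
{
  /* create_timer allocates with new ...  */
  gdb_timer_sketch *t = new gdb_timer_sketch;

  /* ... so poll_timers must release with delete.  Passing the pointer
     to a malloc-style deallocator such as xfree instead would pair
     mismatched allocation and deallocation functions.  */
  delete t;
  return 0;
}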
gdb/ChangeLog:
* event-loop.c (poll_timers): Deallocate timer using delete
instead of xfree.
Breakpoints are currently in a limbo state between C and C++. There is
a pseudo class hierarchy implemented using struct fields. Taking
watchpoint as an example:
struct watchpoint
{
/* The base class. */
struct breakpoint base;
...
}
and it is instantiated with "new watchpoint ()". When destroyed, a
destructor is first invoked through the breakpoint_ops, and then the
memory is freed by calling delete through a pointer to breakpoint.
Address sanitizer complains about this, for example, because we new and
delete the same memory using different types.
This patch takes the logical step of making breakpoint subclasses extend
the breakpoint class for real, and converts their destructors to actual
C++ destructors.
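Sketched in isolation (hypothetical, heavily simplified types; the real
classes carry many more members), the change is from containment to
real inheritance with a virtual destructor:

#include <string>

struct breakpoint
{
  virtual ~breakpoint () = default;
  int number = 0;
};

struct watchpoint : public breakpoint
{
  ~watchpoint () override = default;
  std::string exp_string;   /* illustrative member */
};

int
main ()
{
  /* Allocated as a watchpoint, destroyed through a breakpoint pointer:
     well-defined now that ~breakpoint is virtual.  */
  breakpoint *bp = new watchpoint ();
  delete bp;
  return 0;
}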
Regtested on the buildbot.
gdb/ChangeLog:
* breakpoint.h (struct breakpoint_ops) <dtor>: Remove.
(struct breakpoint) <~breakpoint>: New.
(struct watchpoint): Inherit from breakpoint.
<~watchpoint>: New.
<base>: Remove.
(struct tracepoint): Inherit from breakpoint.
<base>: Remove.
* breakpoint.c (longjmp_breakpoint_ops): Remove.
(struct longjmp_breakpoint): Inherit from breakpoint.
<~longjmp_breakpoint>: New.
<base>: Remove.
(new_breakpoint_from_type): Remove casts.
(watchpoint_in_thread_scope): Remove reference to base field.
(watchpoint_del_at_next_stop): Likewise.
(update_watchpoint): Likewise.
(watchpoint_check): Likewise.
(bpstat_check_watchpoint): Likewise.
(set_longjmp_breakpoint): Likewise.
(struct fork_catchpoint): Inherit from breakpoint.
<base>: Remove.
(struct solib_catchpoint): Inherit from breakpoint.
<~solib_catchpoint>: New.
<base>: Remove.
(dtor_catch_solib): Change to ...
(solib_catchpoint::~solib_catchpoint): ... this.
(breakpoint_hit_catch_solib): Remove reference to base field.
(add_solib_catchpoint): Likewise.
(create_fork_vfork_event_catchpoint): Likewise.
(struct exec_catchpoint): Inherit from breakpoint.
<~exec_catchpoint>: New.
<base>: Remove.
(dtor_catch_exec): Change to ...
(exec_catchpoint::~exec_catchpoint): ... this.
(dtor_watchpoint): Change to ...
(watchpoint::~watchpoint): ... this.
(watch_command_1): Remove reference to base field.
(catch_exec_command_1): Likewise.
(base_breakpoint_dtor): Change to ...
(breakpoint::~breakpoint): ... this.
(base_breakpoint_ops): Remove dtor field value.
(longjmp_bkpt_dtor): Change to ...
(longjmp_breakpoint::~longjmp_breakpoint): ... this.
(strace_marker_create_breakpoints_sal): Remove reference to base
field.
(delete_breakpoint): Don't manually call breakpoint destructor.
(create_tracepoint_from_upload): Remove reference to base field.
(trace_pass_set_count): Likewise.
(initialize_breakpoint_ops): Don't initialize
momentary_breakpoint_ops, don't set dtors.
* ada-lang.c (struct ada_catchpoint): Inherit from breakpoint.
<~ada_catchpoint>: New.
<base>: Remove.
(create_excep_cond_exprs): Remove reference to base field.
(dtor_exception): Change to ...
(ada_catchpoint::~ada_catchpoint): ... this.
(dtor_catch_exception): Remove.
(dtor_catch_exception_unhandled): Remove.
(dtor_catch_assert): Remove.
(create_ada_exception_catchpoint): Remove reference to base
field.
(initialize_ada_catchpoint_ops): Don't set dtors.
* break-catch-sig.c (struct signal_catchpoint): Inherit from
breakpoint.
<~signal_catchpoint>: New.
<base>: Remove.
(signal_catchpoint_dtor): Change to ...
(signal_catchpoint::~signal_catchpoint): ... this.
(create_signal_catchpoint): Remove reference to base field.
(initialize_signal_catchpoint_ops): Don't set dtor.
* break-catch-syscall.c (struct syscall_catchpoint): Inherit
from breakpoint.
<~syscall_catchpoint>: New.
<base>: Remove.
(dtor_catch_syscall): Change to ...
(syscall_catchpoint::~syscall_catchpoint): ... this.
(create_syscall_event_catchpoint): Remove reference to base
field.
(initialize_syscall_catchpoint_ops): Don't set dtor.
* break-catch-throw.c (struct exception_catchpoint): Inherit
from breakpoint.
<~exception_catchpoint>: New.
<base>: Remove.
(dtor_exception_catchpoint): Change to ...
(exception_catchpoint::~exception_catchpoint): ... this.
(handle_gnu_v3_exceptions): Remove reference to base field.
(initialize_throw_catchpoint_ops): Don't set dtor.
* ctf.c (ctf_get_traceframe_address): Remove reference to base
field.
* remote.c (remote_get_tracepoint_status): Likewise.
* tracefile-tfile.c (tfile_get_traceframe_address): Likewise.
* tracefile.c (tracefile_fetch_registers): Likewise.
* tracepoint.c (actions_command): Likewise.
(validate_actionline): Likewise.
(tfind_1): Likewise.
(get_traceframe_location): Likewise.
(find_matching_tracepoint_location): Likewise.
(parse_tracepoint_status): Likewise.
* mi/mi-cmd-break.c (mi_cmd_break_passcount): Likewise.
The longjmp kind of breakpoint has a destructor, but doesn't have an
associated structure. The next patch converts breakpoint destructors
from breakpoint_ops::dtor to actual destructors, but to do that,
longjmp_breakpoint needs a structure that can hold such a destructor.
This patch adds it.
According to initialize_breakpoint_ops, a longjmp breakpoint derives
from "momentary breakpoints", so eventually a momentary_breakpoint
struct/class should probably be created. It's not necessary for the
destructor though, so a structure type for this abstract kind of
breakpoint can be added when we fully convert breakpoint ops into
methods of the breakpoint type hierarchy.
It is now necessary to instantiate different kinds of breakpoint objects
in set_raw_breakpoint_without_location based on bptype (sometimes a
breakpoint, sometimes a longjmp_breakpoint), so it now uses
new_breakpoint_from_type to do that. I also changed set_raw_breakpoint
to use it, even though I don't think that it can ever receive a bptype
that actually requires it. However, I think it's good if all breakpoint
object instantiation is done in a single place.
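A rough sketch of what such a type-dispatching factory looks like (the
enum values and class set below are illustrative assumptions, not the
actual GDB definitions):

#include <memory>

enum sketch_bptype
{
  bp_breakpoint,
  bp_longjmp,
  bp_longjmp_call_dummy
};

struct breakpoint
{
  virtual ~breakpoint () = default;
};

struct longjmp_breakpoint : public breakpoint
{
  ~longjmp_breakpoint () override = default;
};

/* Instantiate the concrete class matching the breakpoint type, so that
   every creation path funnels through one place.  */
static breakpoint *
new_breakpoint_from_type_sketch (sketch_bptype type)
{
  switch (type)
    {
    case bp_longjmp:
    case bp_longjmp_call_dummy:
      return new longjmp_breakpoint ();
    default:
      return new breakpoint ();
    }
}

int
main ()
{
  std::unique_ptr<breakpoint> b
    (new_breakpoint_from_type_sketch (bp_longjmp));
  return 0;
}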
gdb/ChangeLog:
* breakpoint.c (struct longjmp_breakpoint): New struct.
(is_tracepoint_type): Change return type to bool.
(is_longjmp_type): New function.
(new_breakpoint_from_type): Handle longjmp kinds of breakpoints.
(set_raw_breakpoint_without_location): Use
new_breakpoint_from_type.
(set_raw_breakpoint): Likewise.
This is a small preparatory patch to factor out a snippet that appears
twice. More kinds of breakpoints will need to be created based on
bptype, so I think it's a good idea to centralize the instantiation of
breakpoint objects.
gdb/ChangeLog:
* breakpoint.c (new_breakpoint_from_type): New function.
(create_breakpoint_sal): Use new_breakpoint_from_type and
unique_ptr.
(create_breakpoint): Likewise.
FreeBSD ELF cores contain data structures that have two different
layouts: one for ILP32 platforms and a second for LP64 platforms.
Previously, the code used 'bits_per_word' from 'arch_info', but this
field is not a reliable indicator of the format for FreeBSD MIPS cores
in particular.
I had originally posted this patch back in November because process
cores for FreeBSD MIPS contained an e_flags value of 0 in the header
which resulted in a bfd_arch which always had 'bits_per_word' set to
32. This permitted reading o32 cores, but not n64 cores. The feedback
I received then was to try to change n64 cores to use a different
default bfd_arch that had a 64-bit 'bits_per_word' when e_flags was zero.
I submitted a patch to that effect but it was never approved. Instead,
I changed FreeBSD's kernel and gcore commands to preserve the e_flags
field from an executable when generating process cores. With a proper
e_flags field in process cores, n64 cores now use a 64-bit bfd_arch and
now work fine. However, the change to include e_flags in the process
cores had the unintended side effect of breaking handling of o32
process cores. Specifically, FreeBSD MIPS builds o32 with a default
MIPS architecture of 'mips3', thus FreeBSD process cores with a non-zero
e_flags match the 'mips3' bfd_arch which has 64 'bits_per_word'.
From this, it seems that 'bits_per_word' for FreeBSD MIPS is not likely
to ever be completely correct. However, FreeBSD core dumps do
reliably set the ELF class to ELFCLASS32 for cores using ILP32 and
ELFCLASS64 for cores using LP64. As such, I think my original patch of
using the ELF class instead of 'bits_per_word' is probably the simplest
and most reliable approach for detecting the note structure layout.
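A sketch of the selection logic (illustrative only; the real code is in
elfcore_grok_freebsd_psinfo and elfcore_grok_freebsd_prstatus and reads
the class from the core's ELF header):

/* EI_CLASS values from the ELF specification.  */
enum { ELFCLASS32 = 1, ELFCLASS64 = 2 };

enum note_layout { LAYOUT_ILP32, LAYOUT_LP64 };

/* Pick the FreeBSD core note layout from the ELF class of the core
   file, which FreeBSD sets reliably, rather than from
   arch_info->bits_per_word.  */
static note_layout
freebsd_note_layout (int ei_class)
{
  return ei_class == ELFCLASS64 ? LAYOUT_LP64 : LAYOUT_ILP32;
}

int
main ()
{
  return freebsd_note_layout (ELFCLASS64) == LAYOUT_LP64 ? 0 : 1;
}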
bfd/ChangeLog:
* elf.c (elfcore_grok_freebsd_psinfo): Use ELF header class to
determine structure sizes.
(elfcore_grok_freebsd_prstatus): Likewise.
ELFv2 functions with localentry:0 are those with a single entry point,
ie. global entry == local entry, and that have no requirement on r2 or
r12, and guarantee r2 is unchanged on return. Such an external
function can be called via the PLT without saving r2 or restoring it
on return, avoiding a common load-hit-store for small functions. The
optimization is attractive. The TOC pointer load-hit-store is a major
reason why calls to small functions that need no register saves, or
with shrink-wrap, no register saves on a fast path, are slow on
powerpc64le.
To be safe, this optimization needs ld.so support to check that the
run-time matches link-time function implementation. If a function
in a shared library with st_other localentry non-zero is called
without saving and restoring r2, r2 will be trashed on return, leading
to segfaults. For that reason the optimization does not happen for
weak functions since a weak definition is a fairly solid hint that the
function will likely be overridden. I'm also not enabling the
optimization by default unless glibc-2.26 is detected, which should
have the ld.so checks implemented.
bfd/
* elf64-ppc.c (struct ppc_link_hash_table): Add has_plt_localentry0.
(ppc64_elf_merge_symbol_attribute): Merge localentry bits from
dynamic objects.
(is_elfv2_localentry0): New function.
(ppc64_elf_tls_setup): Default params->plt_localentry0.
(plt_stub_size): Adjust size for tls_get_addr_opt stub.
(build_tls_get_addr_stub): Use a simpler stub when r2 is not saved.
(ppc64_elf_size_stubs): Leave stub_type as ppc_stub_plt_call for
optimized localentry:0 stubs.
(ppc64_elf_build_stubs): Save r2 in ELFv2 __glink_PLTresolve.
(ppc64_elf_relocate_section): Leave nop unchanged for optimized
localentry:0 stubs.
(ppc64_elf_finish_dynamic_sections): Set PPC64_OPT_LOCALENTRY in
DT_PPC64_OPT.
* elf64-ppc.h (struct ppc64_elf_params): Add plt_localentry0.
include/
* elf/ppc64.h (PPC64_OPT_LOCALENTRY): Define.
ld/
* emultempl/ppc64elf.em (params): Init plt_localentry0 field.
(enum ppc64_opt): New, replacing OPTION_* defines. Add
OPTION_PLT_LOCALENTRY, and OPTION_NO_PLT_LOCALENTRY.
(PARSE_AND_LIST_*): Support --plt-localentry and --no-plt-localentry.
* testsuite/ld-powerpc/elfv2so.d: Update.
* testsuite/ld-powerpc/powerpc.exp (TLS opt 5): Use --no-plt-localentry.
* testsuite/ld-powerpc/tlsopt5.d: Update.
Later CPU generations added optional operands to the ipte/idte
instructions. I've added these with:
https://sourceware.org/ml/binutils/2017-05/msg00316.html ... but
supported the optional operands only with the specific hardware
levels. However, it is more useful to have the optional operands
already in the first versions. Of course they need to be zero there.
Regression-tested on s390 and s390x. Committed to mainline.
opcodes/ChangeLog:
2017-06-01 Andreas Krebbel <krebbel@linux.vnet.ibm.com>
* s390-opc.txt: Support the optional parameters with the first
versions of ipte/idte.
gas/ChangeLog:
2017-06-01 Andreas Krebbel <krebbel@linux.vnet.ibm.com>
* testsuite/gas/s390/esa-g5.d: Add ipte tests.
* testsuite/gas/s390/esa-g5.s: Likewise.
* testsuite/gas/s390/zarch-z196.d: Remove ipte tests.
* testsuite/gas/s390/zarch-z196.s: Likewise.
* testsuite/gas/s390/zarch-z990.d: Add idte tests.
* testsuite/gas/s390/zarch-z990.s: Likewise.
* testsuite/gas/s390/zarch-zEC12.d: Remove ipte/idte tests.
* testsuite/gas/s390/zarch-zEC12.s: Likewise.
Rename "mem" related commands, so that their naming is consistent with
the <command-name>_command pattern of naming functions that implement
commands.
gdb/ChangeLog:
* memattr.c (mem_info_command): Rename to ...
(info_mem_command): ... this.
(mem_enable_command): Rename to ...
(enable_mem_command): ... this.
(mem_disable_command): Rename to ...
(disable_mem_command): ... this.
(mem_delete_command): Rename to ...
(delete_mem_command): ... this.
(_initialize_mem): Adjust function names.
Newer versions of libipt support instruction flow decoder events instead of
indicating those events with flags in struct pt_insn. Add support for them in
GDB.
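A hedged sketch of the event-draining loop this enables (assuming
libipt v2's pt_insn_event interface and the pts_event_pending status
bit; this is a function sketch, not the actual GDB code):

#include <intel-pt.h>

/* Drain events reported by the instruction flow decoder while the
   status word says events are pending, before stepping further.  */
static int
drain_pt_events (struct pt_insn_decoder *decoder, int status)
{
  while (status >= 0 && (status & pts_event_pending) != 0)
    {
      struct pt_event event;

      status = pt_insn_event (decoder, &event, sizeof (event));
      if (status < 0)
        break;

      /* GDB's handle_pt_insn_events would translate events such as
         tracing being enabled or disabled into its branch trace.  */
    }

  return status;
}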
gdb/
* btrace.c (handle_pt_insn_events): New.
(ftrace_add_pt): Call handle_pt_insn_events. Rename ERRCODE into
STATUS. Split into this and ...
(handle_pt_insn_event_flags): ... this.
Version 2 of libipt adds an event system to instruction flow decoders and
deprecates indicating events via flags in struct pt_insn. Add configuration
checks to determine which version we have.
gdb/
* configure.ac: Check for pt_insn_event, struct pt_insn.enabled,
and struct pt_insn.resynced.
* configure: Regenerated.
* config.in: Regenerated.
libiberty/ChangeLog:
2017-05-31 Eli Zaretskii <eliz@gnu.org>
* waitpid.c (wait) [__MINGW32__]: Define as a macro
that calls _cwait, so that this function works on MinGW.
Currently print_insn_arc relies on the BFD mach and ELF private headers
to distinguish between various ARC architectures. Sometimes those values
are not correct or available, mainly when debugging targets without an
ELF file available. Changing the BFD mach is not a problem for the
debugger, because this is a generic BFD field, and GDB, for example,
already sets it according to information provided in an XML target
description or specified via the GDB 'set arch' command. However, things
are more complicated for ELF private headers, since they require an
actual ELF file to exist. To work around this problem, this patch allows
the CPU model to be specified via disassembler options. If a CPU is
specified in the options, it takes precedence over whatever might be
specified in the ELF file.
This is mostly needed for ARC EM and ARC HS, because they have the same
"architecture" (mach), ARCv2, and differ in their private ELF headers.
Other ARC architectures can be distinguished from each other purely via
the "mach" field.
The proposed disassembler option format is "cpu=<CPU>", where CPU can be
any valid ARC CPU name as supported by GAS. Note that this creates a
seeming redundancy with objdump's -m/--architecture option; however,
-mEM and -mHS still result in the "ARCv2" architecture internally, while
-Mcpu={HS,EM} has an actual effect on the disassembler.
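For example, a user could pass -M cpu=hs38 to objdump to force HS
decoding. A trivial sketch of recognising such an option string
(illustrative only; the real parsing is in parse_cpu_option in
opcodes/arc-dis.c):

#include <cstring>
#include <string>

/* Return the CPU name from a "cpu=<CPU>" disassembler option, or an
   empty string if the option is not of that form.  */
static std::string
cpu_from_option (const char *option)
{
  const char prefix[] = "cpu=";

  if (std::strncmp (option, prefix, sizeof (prefix) - 1) == 0)
    return option + sizeof (prefix) - 1;
  return std::string ();
}

int
main ()
{
  return cpu_from_option ("cpu=hs38") == "hs38" ? 0 : 1;
}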
opcodes/ChangeLog:
yyyy-mm-dd Anton Kolesov <anton.kolesov@synopsys.com>
* arc-dis.c (enforced_isa_mask): Declare.
(cpu_types): Likewise.
(parse_cpu_option): New function.
(parse_disassembler_options): Use it.
(print_insn_arc): Use enforced_isa_mask.
(print_arc_disassembler_options): Document new options.
binutils/ChangeLog:
yyyy-mm-dd Anton Kolesov <anton.kolesov@synopsys.com>
* doc/binutils.texi: Document new cpu=... disassembler options for ARC.
This patch extracts the ARC CPU definitions from gas/config/tc-arc.c
(cpu_types) into a separate file, arc-cpu.def. This will allow the CPU
type definitions to be reused in multiple places where they might be
needed, for example in the disassembler, and will help ensure that gas
and the disassembler use the same option values for CPUs.
The arc-cpu.def file relies on preprocessor macros which are defined
elsewhere. This allows, for example, multiple C files to include
arc-cpu.def but define different macros, therefore creating different
structures.
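The mechanism described is the usual ".def"-file (X-macro) pattern. A
self-contained sketch of the idea (the macro name, CPU names and fields
are illustrative, not the contents of the real arc-cpu.def):

#include <cstdio>

/* In the real tree the list below would live in include/elf/arc-cpu.def
   and be pulled in with #include; each includer defines ARC_CPU first,
   expanding the entries into whatever structure it needs.  */
#define ARC_CPU_LIST \
  ARC_CPU ("em4", 2)  \
  ARC_CPU ("hs38", 3)

struct cpu_type_sketch
{
  const char *name;
  int isa;
};

#define ARC_CPU(NAME, ISA) { NAME, ISA },
static const cpu_type_sketch cpu_types_sketch[] = { ARC_CPU_LIST };
#undef ARC_CPU

int
main ()
{
  for (const cpu_type_sketch &cpu : cpu_types_sketch)
    std::printf ("%s -> isa %d\n", cpu.name, cpu.isa);
  return 0;
}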
include/ChangeLog:
yyyy-mm-dd Anton Kolesov <anton.kolesov@synopsys.com>
* elf/arc-cpu.def: New file.
gas/ChangeLog:
yyyy-mm-dd Anton Kolesov <anton.kolesov@synopsys.com>
* config/tc-arc.c (cpu_types): Include arc-cpu.def
Signed-off-by: Anton Kolesov <Anton.Kolesov@synopsys.com>
The general rule for bfd_arch_info_type->compatible (A, B) is that if A
and B are compatible, then this function should return the architecture
that is more "feature-rich", that is, the one that can run both A and B.
ARCv2, EM and HS all have the same mach number, so
bfd_default_compatible assumes they are the same and returns A. That
causes issues with GDB, because GDB assumes that if machines are
compatible, then "compatible ()" always returns the same machine
regardless of argument order. As a result GDB gets confused because, for
example, compatible(ARCv2, EM) returns ARCv2, but compatible(EM, ARCv2)
returns EM, hence GDB is not sure whether they are compatible and prints
a warning.
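A minimal sketch of the symmetry requirement (illustrative values, not
the actual cpu-arc.c logic):

#include <cassert>

/* Illustrative stand-ins: a higher value means "can run the other".  */
enum arch_sketch { ARCH_BASELINE = 1, ARCH_SUPERSET = 2 };

/* Return the more feature-rich of two compatible architectures,
   independent of argument order.  */
static arch_sketch
compatible_sketch (arch_sketch a, arch_sketch b)
{
  return a >= b ? a : b;
}

int
main ()
{
  /* The same answer for both argument orders is what GDB relies on.  */
  assert (compatible_sketch (ARCH_BASELINE, ARCH_SUPERSET)
          == compatible_sketch (ARCH_SUPERSET, ARCH_BASELINE));
  return 0;
}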
bfd/ChangeLog:
yyyy-mm-dd Anton Kolesov <Anton.Kolesov@synopsys.com>
* cpu-arc.c (arc_compatible): New function.