.gdb_index prod perf regression: find before insert in unordered_map

"perf" shows the unordered_map::emplace call in write_hash_table a bit
high up on profiles.  Fix this using the find + insert idiom instead
of going straight to insert.

I tried doing the same to the other unordered_map::emplace calls in
the file, but saw no performance improvement, so left them be.
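
For illustration, here is a minimal standalone sketch of the idiom
(made-up names, not GDB code).  Plain emplace typically allocates and
constructs a node up front and destroys it again if the key turns out
to already exist; finding first pays that cost only for genuinely new
keys:

  #include <string>
  #include <unordered_map>

  /* Stand-in for symbol_hash_table: maps a key to its
     constant-pool offset.  */
  static std::unordered_map<std::string, size_t> table;

  /* Naive version: emplace unconditionally; on a duplicate key the
     speculatively built node is discarded.  */
  static size_t
  intern_naive (const std::string &key, size_t next_offset)
  {
    const auto insertpair = table.emplace (key, next_offset);
    return insertpair.first->second;
  }

  /* Find-before-insert idiom: allocate a node only when the key is
     actually new.  */
  static size_t
  intern (const std::string &key, size_t next_offset)
  {
    const auto found = table.find (key);
    if (found != table.end ())
      return found->second;
    table.emplace (key, next_offset);
    return next_offset;
  }

  int
  main ()
  {
    size_t a = intern_naive ("some_key", 0);  /* inserts, returns 0 */
    size_t b = intern ("some_key", 42);       /* duplicate, returns 0 */
    return a == b ? 0 : 1;
  }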

With a '-g3 -O2' build of gdb, and:

  $ cat save-index.cmd
  set $i = 0
  while $i < 100
    save gdb-index .
    set $i = $i + 1
  end
  $ time ./gdb -data-directory=data-directory -nx --batch -q -x save-index.cmd  ./gdb.pristine

I get an improvement of ~7%:

  ~7.0s => ~6.5s (average of 5 runs).

gdb/ChangeLog:
2017-06-12  Pedro Alves  <palves@redhat.com>

	* dwarf2read.c (write_hash_table): Check if key already exists
	before emplacing.

gdb/ChangeLog

@@ -1,3 +1,8 @@
+2017-06-12  Pedro Alves  <palves@redhat.com>
+
+	* dwarf2read.c (write_hash_table): Check if key already exists
+	before emplacing.
+
 2017-06-12  Pedro Alves  <palves@redhat.com>
 
 	* dwarf2read.c (data_buf::append_space): Rename to...

gdb/dwarf2read.c

@@ -23430,11 +23430,22 @@ write_hash_table (mapped_symtab *symtab, data_buf &output, data_buf &cpool)
       if (it == NULL)
         continue;
       gdb_assert (it->index_offset == 0);
-      const auto insertpair
-        = symbol_hash_table.emplace (it->cu_indices, cpool.size ());
-      it->index_offset = insertpair.first->second;
-      if (!insertpair.second)
-        continue;
+
+      /* Finding before inserting is faster than always trying to
+         insert, because inserting always allocates a node, does the
+         lookup, and then destroys the new node if another node
+         already had the same key.  C++17 try_emplace will avoid
+         this.  */
+      const auto found
+        = symbol_hash_table.find (it->cu_indices);
+      if (found != symbol_hash_table.end ())
+        {
+          it->index_offset = found->second;
+          continue;
+        }
+
+      symbol_hash_table.emplace (it->cu_indices, cpool.size ());
+      it->index_offset = cpool.size ();
       cpool.append_data (MAYBE_SWAP (it->cu_indices.size ()));
       for (const auto iter : it->cu_indices)
         cpool.append_data (MAYBE_SWAP (iter));
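
For reference, a minimal sketch (again with made-up names, not GDB
code) of the C++17 alternative the new comment mentions: try_emplace
is specified not to construct the mapped value when the key already
exists, so the manual find can be dropped once C++17 is available:

  #include <string>
  #include <unordered_map>

  static std::unordered_map<std::string, size_t> table;

  static size_t
  intern (const std::string &key, size_t next_offset)
  {
    /* try_emplace (C++17) does the lookup first and constructs the
       mapped value only if the key is absent, avoiding the wasted
       node that plain emplace can build and destroy on duplicates.  */
    const auto insertpair = table.try_emplace (key, next_offset);
    return insertpair.first->second;
  }

  int
  main ()
  {
    size_t first = intern ("some_key", 0);   /* inserts, returns 0 */
    size_t again = intern ("some_key", 42);  /* key exists; still 0 */
    return first == again ? 0 : 1;
  }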