mt_allocator.h: Removed the last pointer.

2004-02-17  Stefan Olsson  <stefan@xapa.se>

	* include/ext/mt_allocator.h: Removed the last
	pointer. Deallocated blocks are now added to the front of
	freelists as proposed by Felix Yen.  This gives roughly 10%
	performance boost and saves some memory.
	* docs/html/ext/mt_allocator.html: Changed to reflect that
	deallocated blocks are now added to the front of freelists. The
	reason for this approach is also explained.

From-SVN: r78009
Commit:    4d0bdcd6e1 (parent f8b58e56c5)
Author:    Stefan Olsson <stefan@xapa.se>
Date:      2004-02-18 02:21:10 +01:00
Committed: Benjamin Kosnik
3 changed files with 69 additions and 53 deletions

libstdc++-v3/ChangeLog

@@ -1,3 +1,13 @@
+2004-02-17  Stefan Olsson  <stefan@xapa.se>
+
+	* include/ext/mt_allocator.h: Removed the last
+	pointer. Deallocated blocks are now added to the front of
+	freelists as proposed by Felix Yen.  This gives roughly 10%
+	performance boost and saves some memory.
+	* docs/html/ext/mt_allocator.html: Change due to that deallocated
+	blocks now are added to the front of freelists. The reason to this
+	approach is also explained.
+
 2004-02-17  Paolo Carlini  <pcarlini@suse.de>
 
 	* include/bits/locale_facets.tcc (num_get<>::_M_extract_float,

docs/html/ext/mt_allocator.html

@@ -107,8 +107,6 @@ The _S_init() function:
 This holds the pointer to the first free block for each thread in this
 bin. I.e., if we would like to know where the first free block of size 32
 for thread number 3 is we would look this up by: _S_bin[ 5 ].first[ 3 ]
-bin_record->last = See above, the only difference being that this points
-to the last record on the same freelist.
 The above created block_record pointers members are now initialized to
 their initial values. I.e. _S_bin[ n ].first[ n ] = NULL;
@@ -196,9 +194,9 @@ This is the first two blocks in freelist for thread id 3 in bin 3 (8 bytes):
 |                |                |
 +----------------+                |
 +----------------+                |
-| next*          |<-+  (If next == NULL it's the last one on the list and
-|                |      then the _S_bin[ 3 ].last[ 3 ] pointer points to
-|                |      here as well)
+| next*          |<-+  (If next == NULL it's the last one on the list)
+|                |
+|                |
 |                |
 +----------------+
 | thread_id = 3  |
@@ -242,16 +240,21 @@ If the freelist is empty (the pointer is NULL) we must get memory from the
 system and build us a freelist within this memory. All requests for new memory
 is made in chunks of _S_chunk_size. Knowing the size of a block_record and
 the bytes that this bin stores we then calculate how many blocks we can create
-within this chunk, build the list, remove the first block, update the pointers
-(_S_bin[ bin ].first[ 0 ] and _S_bin[ bin ].last[ 0 ]) and return a pointer
-to that blocks data.
+within this chunk, build the list, remove the first block, update the pointer
+(_S_bin[ bin ].first[ 0 ]) and return a pointer to that blocks data.
 </p>
 <p>
 Deallocation is equally simple; the pointer is casted back to a block_record
-pointer, lookup which bin to use based on the size, add the block to the end
-of the global freelist (with the next pointer set to NULL) and update the
-pointers as needed (_S_bin[ bin ].first[ 0 ] and _S_bin[ bin ].last[ 0 ]).
+pointer, lookup which bin to use based on the size, add the block to the front
+of the global freelist and update the pointer as needed
+(_S_bin[ bin ].first[ 0 ]).
 </p>
+<p>
+The decision to add deallocated blocks to the front of the freelist was made
+after a set of performance measurements that showed that this is roughly 10%
+faster than maintaining a set of "last pointers" as well.
+</p>
 <h3 class="left">
@@ -350,7 +353,7 @@ current threads freelist instead of to the global.
 <p>
 The basic process of a deallocation call is simple: always add the
-block to the end of the current threads freelist and update the
+block to the front of the current threads freelist and update the
 counters and pointers (as described earlier with the specific check of
 ownership that causes the used counter of the thread that originally
 allocated the block to be decreased instead of the current threads

include/ext/mt_allocator.h

@@ -197,12 +197,11 @@ namespace __gnu_cxx
       struct bin_record
       {
         /*
-         * An "array" of pointers to the first/last free block for each
-         * thread id. Memory to these "arrays" is allocated in _S_init()
+         * An "array" of pointers to the first free block for each
+         * thread id. Memory to this "array" is allocated in _S_init()
          * for _S_max_threads + global pool 0.
          */
         block_record** volatile first;
-        block_record** volatile last;
 
         /*
          * An "array" of counters used to keep track of the amount of blocks
@@ -336,35 +335,40 @@ namespace __gnu_cxx
               block->next = NULL;
               block->thread_id = thread_id;
-              _S_bin[bin].last[thread_id] = block;
             }
           else
             {
               size_t global_count = 0;
+              block_record* tmp;
               while( _S_bin[bin].first[0] != NULL &&
                      global_count < block_count )
                 {
+                  tmp = _S_bin[bin].first[0]->next;
                   block = _S_bin[bin].first[0];
                   if (_S_bin[bin].first[thread_id] == NULL)
-                    _S_bin[bin].first[thread_id] = block;
+                    {
+                      _S_bin[bin].first[thread_id] = block;
+                      block->next = NULL;
+                    }
                   else
-                    _S_bin[bin].last[thread_id]->next = block;
-                  _S_bin[bin].last[thread_id] = block;
+                    {
+                      block->next = _S_bin[bin].first[thread_id];
+                      _S_bin[bin].first[thread_id] = block;
+                    }
                   block->thread_id = thread_id;
                   _S_bin[bin].free[thread_id]++;
-                  _S_bin[bin].first[0] = _S_bin[bin].first[0]->next;
+                  _S_bin[bin].first[0] = tmp;
                   global_count++;
                 }
-              block->next = NULL;
               __gthread_mutex_unlock(_S_bin[bin].mutex);
             }
@@ -404,7 +408,6 @@ namespace __gnu_cxx
             }
           block->next = NULL;
-          _S_bin[bin].last[0] = block;
 
           block = _S_bin[bin].first[0];
@@ -464,12 +467,6 @@ namespace __gnu_cxx
       block_record* block = (block_record*)((char*)__p
                                             - sizeof(block_record));
-      /*
-       * This block will always be at the back of a list and thus
-       * we set its next pointer to NULL.
-       */
-      block->next = NULL;
-
 #ifdef __GTHREADS
       if (__gthread_active_p())
         {
@@ -491,25 +488,30 @@ namespace __gnu_cxx
             {
               __gthread_mutex_lock(_S_bin[bin].mutex);
+              block_record* tmp;
               while (remove > 0)
                 {
+                  tmp = _S_bin[bin].first[thread_id]->next;
                   if (_S_bin[bin].first[0] == NULL)
-                    _S_bin[bin].first[0] = _S_bin[bin].first[thread_id];
+                    {
+                      _S_bin[bin].first[0] = _S_bin[bin].first[thread_id];
+                      _S_bin[bin].first[0]->next = NULL;
+                    }
                   else
-                    _S_bin[bin].last[0]->next = _S_bin[bin].first[thread_id];
+                    {
+                      _S_bin[bin].first[thread_id]->next = _S_bin[bin].first[0];
+                      _S_bin[bin].first[0] = _S_bin[bin].first[thread_id];
+                    }
-                  _S_bin[bin].last[0] = _S_bin[bin].first[thread_id];
-                  _S_bin[bin].first[thread_id] =
-                    _S_bin[bin].first[thread_id]->next;
+                  _S_bin[bin].first[thread_id] = tmp;
                   _S_bin[bin].free[thread_id]--;
                   remove--;
                 }
-              _S_bin[bin].last[0]->next = NULL;
               __gthread_mutex_unlock(_S_bin[bin].mutex);
             }
@@ -518,11 +520,15 @@ namespace __gnu_cxx
            * counters and owner id as needed
            */
           if (_S_bin[bin].first[thread_id] == NULL)
-            _S_bin[bin].first[thread_id] = block;
+            {
+              _S_bin[bin].first[thread_id] = block;
+              block->next = NULL;
+            }
           else
-            _S_bin[bin].last[thread_id]->next = block;
-          _S_bin[bin].last[thread_id] = block;
+            {
+              block->next = _S_bin[bin].first[thread_id];
+              _S_bin[bin].first[thread_id] = block;
+            }
 
           _S_bin[bin].free[thread_id]++;
@@ -541,11 +547,15 @@ namespace __gnu_cxx
            * Single threaded application - return to global pool
            */
           if (_S_bin[bin].first[0] == NULL)
-            _S_bin[bin].first[0] = block;
+            {
+              _S_bin[bin].first[0] = block;
+              block->next = NULL;
+            }
           else
-            _S_bin[bin].last[0]->next = block;
-          _S_bin[bin].last[0] = block;
+            {
+              block->next = _S_bin[bin].first[0];
+              _S_bin[bin].first[0] = block;
+            }
         }
     }
   };
@@ -669,12 +679,6 @@ namespace __gnu_cxx
       if (!_S_bin[bin].first)
         std::__throw_bad_alloc();
 
-      _S_bin[bin].last = static_cast<block_record**>(::operator
-                                     new(sizeof(block_record*) * __n));
-      if (!_S_bin[bin].last)
-        std::__throw_bad_alloc();
-
 #ifdef __GTHREADS
       if (__gthread_active_p())
         {
@@ -708,7 +712,6 @@ namespace __gnu_cxx
       for (size_t thread = 0; thread < __n; thread++)
         {
           _S_bin[bin].first[thread] = NULL;
-          _S_bin[bin].last[thread] = NULL;
 #ifdef __GTHREADS
           if (__gthread_active_p())
             {