Policy-Based Data Structures
Intro This is a library of policy-based elementary data structures: associative containers and priority queues. It is designed for high performance, flexibility, semantic safety, and conformance to the corresponding containers in std and std::tr1 (except for some points where it differs by design).
Performance Issues An attempt is made to categorize the wide variety of possible container designs in terms of performance-impacting factors. These performance factors are translated into design policies and incorporated into container design. There is a tension in unravelling these factors into a coherent set of policies. Every attempt is made to keep the set of factors minimal. However, in many cases multiple factors make for long template names. Every attempt is made to alias and use typedefs in the source files, but the generated names for external symbols can be large for binary files or debuggers. In many cases, the longer names allow capabilities and behaviours controlled by macros to also be unambiguously emitted as distinct generated names. Specific issues found while unravelling performance factors in the design of associative containers and priority queues follow.
Associative Associative containers depend on their composite policies to a very large extent. Implicitly hard-wiring policies can hamper their performance and limit their functionality. An efficient hash-based container, for example, requires policies for testing key equivalence, hashing keys, translating hash values into positions within the hash table, and determining when and how to resize the table internally. A tree-based container can efficiently support order statistics, i.e. the ability to query what is the order of each key within the sequence of keys in the container, but only if the container is supplied with a policy to internally update metadata. There are many other such examples. Ideally, all associative containers would share the same interface. Unfortunately, underlying data structures and mapping semantics differentiate between different containers. For example, suppose one writes a generic function manipulating an associative container: template<typename Cntnr> void some_op_sequence(Cntnr& r_cnt) { ... } Given this, what can one assume about the instantiating container? The answer varies according to its underlying data structure. If the underlying data structure of Cntnr is based on a tree or trie, then the order of elements is well defined; otherwise, it is not, in general. If the underlying data structure of Cntnr is based on a collision-chaining hash table, then modifying r_cnt will not invalidate its iterators' order; if the underlying data structure is a probing hash table, then this is not the case. If the underlying data structure is based on a tree or trie, then a reference to the container can efficiently be split; otherwise, it cannot, in general. If the underlying data structure is a red-black tree, then splitting a reference to the container is exception-free; if it is an ordered-vector tree, exceptions can be thrown.
Priority Queues Priority queues are useful when one needs to efficiently access a minimum (or maximum) value as the set of values changes. Most useful data structures for priority queues have a relatively simple structure, as they are geared toward relatively simple requirements. Unfortunately, these structures do not support access to an arbitrary value, which turns out to be necessary in many algorithms - say, decreasing an arbitrary value in a graph algorithm. Therefore, some extra mechanism must be invented for accessing arbitrary values. There are at least two alternatives: embedding an associative container in a priority queue, or allowing cross-referencing through iterators. The first solution adds significant overhead; the second solution requires a precise definition of iterator invalidation, which is the next point. Priority queues, like hash-based containers, store values in an order that is meaningless and undefined externally. For example, a push operation can internally reorganize the values. Because of this characteristic, describing a priority queue's iterators is difficult: on one hand, the values to which iterators point can remain valid, but on the other, the logical order of iterators can change unpredictably. Roughly speaking, any element that is both inserted to a priority queue (e.g. through push) and removed from it (e.g. through pop) incurs a logarithmic overhead (in the amortized sense). Different underlying data structures place the actual cost differently: some are optimized for amortized complexity, whereas others guarantee that specific operations only have a constant cost. One underlying data structure might be chosen if modifying a value is frequent (Dijkstra's shortest-path algorithm), whereas a different one might be chosen otherwise.
Unfortunately, an array-based binary heap - an underlying data structure that optimizes (in the amortized sense) push and pop operations - differs from the others in terms of its invalidation guarantees. Other design decisions also impact the cost and placement of the overhead, at the expense of more difference in the kinds of operations that the underlying data structure can support. These differences pose a challenge when creating a uniform interface for priority queues.
Goals Many fine associative-container libraries have already been written, most notably the C++ standard's associative containers. Why then write another library? This section shows some possible advantages of this library, when considering the challenges in the introduction. Many of these points stem from the fact that the ISO C++ process introduced associative containers in a two-step process (first standardizing tree-based containers, and only then adding hash-based containers, which are fundamentally different), did not standardize priority queues as containers, and (in our opinion) overloads the iterator concept.
Associative
Policy Choices Associative containers require a relatively large number of policies to function efficiently in various settings. In some cases this is needed to make their common operations more efficient, and in other cases this allows them to support a larger set of operations. Hash-based containers, for example, support look-up and insertion methods (find and insert). In order to locate elements quickly, they are supplied a hash functor, which instructs how to transform a key object into some size type; a hash functor might transform "hello" into 1123002298. A hash table, though, requires transforming each key object into some size-type value in some specific domain; a hash table with a 128-entry table might transform "hello" into position 63. The policy by which the hash value is transformed into a position within the table can dramatically affect performance. Hash-based containers also do not resize naturally (as opposed to tree-based containers, for example). The appropriate resize policy is unfortunately intertwined with the policy that transforms hash values into positions within the table. Tree-based containers, for example, also support look-up and insertion methods, and are primarily useful when maintaining order between elements is important. In some cases, though, one can utilize their balancing algorithms for completely different purposes. Figure A shows a tree in which each node contains two entries: a floating-point key, and some size-type metadata (in bold beneath it) that is the number of nodes in the sub-tree. (The root has key 0.99, and has 5 nodes (including itself) in its sub-tree.) A container based on this data structure can obviously answer efficiently whether 0.3 is in the container object, but it can also answer what is the order of 0.3 among all keys in the container object.
As another example, Figure B shows a tree in which each node contains two entries: a half-open geometric line interval, and numeric metadata (in bold beneath it) that is the largest endpoint of all intervals in its sub-tree. (The root describes the interval [20, 36), and the largest endpoint in its sub-tree is 99.) A container based on this data structure can obviously answer efficiently whether [3, 41) is in the container object, but it can also answer efficiently whether the container object has intervals that intersect [3, 41). These types of queries are very useful in geometric algorithms and lease-management algorithms. It is important to note, however, that as the trees are modified, their internal structure changes. To maintain these invariants, one must supply some policy that is aware of these changes. Without this, it would be better to use a linked list (in itself very efficient for these purposes).
Node Invariants
Underlying Data Structures The standard C++ library contains associative containers based on red-black trees and collision-chaining hash tables. These are very useful, but they are not ideal for all types of settings. The figure below shows the different underlying data structures currently supported in this library.
Underlying Associative Data Structures
A shows a collision-chaining hash table, B shows a probing hash table, C shows a red-black tree, D shows a splay tree, E shows a tree based on an ordered vector (implicit in the order of the elements), F shows a PATRICIA trie, and G shows a list-based container with update policies. Each of these data structures has some performance benefits, in terms of speed, size or both. For now, note that vector-based trees and probing hash tables manipulate memory more efficiently than red-black trees and collision-chaining hash tables, and that list-based associative containers are very useful for constructing "multimaps". Now consider a function manipulating a generic associative container: template<class Cntnr> int some_op_sequence(Cntnr &r_cnt) { ... } Ideally, the underlying data structure of Cntnr would not affect what can be done with r_cnt. Unfortunately, this is not the case. For example, if Cntnr is std::map, then the function can use std::for_each(r_cnt.find(foo), r_cnt.find(bar), foobar) in order to apply foobar to all elements between foo and bar. If Cntnr is a hash-based container, then this call's results are undefined. Also, if Cntnr is tree-based, the type and object of the comparison functor can be accessed. If Cntnr is hash-based, these queries are nonsensical. There are various other differences based on the container's underlying data structure. For one, they can be constructed by, and queried for, different policies. Furthermore: Containers based on C, D, E and F store elements in a meaningful order; the others store elements in a meaningless (and probably time-varying) order. By implication, only containers based on C, D, E and F can support erase operations taking an iterator and returning an iterator to the following element without performance loss. Containers based on C, D, E, and F can be split and joined efficiently, while the others cannot.
Containers based on C and D, furthermore, can guarantee that this is exception-free; containers based on E cannot guarantee this. Containers based on all but E can guarantee that erasing an element is exception free; containers based on E cannot guarantee this. Containers based on all but B and E can guarantee that modifying an object of their type does not invalidate iterators or references to their elements, while containers based on B and E cannot. Containers based on C, D, and E can furthermore make a stronger guarantee, namely that modifying an object of their type does not affect the order of iterators. A unified tag and traits system (as used for the C++ standard library iterators, for example) can ease generic manipulation of associative containers based on different underlying data structures.
Iterators Iterators are central to the design of the standard library containers, because of the container/algorithm/iterator decomposition that allows an algorithm to operate on a range through iterators of some sequence. Iterators, then, are useful because they allow going over a specific sequence. The standard library also uses iterators for accessing a specific element: when an associative container returns one through find. The standard library consistently uses the same types of iterators for both purposes: going over a range, and accessing a specific found element. Before the introduction of hash-based containers to the standard library, this made sense (with the exception of priority queues, which are discussed later). When using the standard associative containers together with non-order-preserving associative containers (and also because of the priority-queue container), there is a possible need for different types of iterators for self-organizing containers: the iterator concept seems overloaded to mean two different things (in some cases). See Design::Associative Containers::Data-Structure Genericity::Point-Type and Range-Type Methods.
Using Point Iterators for Range Operations Suppose cntnr is some associative container type, and say c is an object of type cntnr. Then what will be the outcome of std::for_each(c.find(1), c.find(5), foo); If c is a tree-based container object, then an in-order walk will apply foo to the relevant elements, as in the graphic below, label A. If c is a hash-based container object, then the order of elements between any two elements is undefined (and probably time-varying); there is no guarantee that the elements traversed will coincide with the logical elements between 1 and 5, as in label B.
Range Iteration in Different Data Structures
In our opinion, this problem is not caused just because red-black trees are order preserving while collision-chaining hash tables are (generally) not - it is more fundamental. Most of the standard's containers order sequences in a well-defined manner that is determined by their interface: calling insert on a tree-based container modifies its sequence in a predictable way, as does calling push_back on a list or a vector. Conversely, collision-chaining hash tables, probing hash tables, priority queues, and list-based containers (which are very useful for "multimaps") are self-organizing data structures; the effect of each operation modifies their sequences in a manner that is (practically) determined by their implementation. Consequently, applying an algorithm to a sequence obtained from most containers may or may not make sense, but applying it to a sub-sequence of a self-organizing container does not.
Cost to Point Iterators to Enable Range Operations Suppose c is some collision-chaining hash-based container object, and one calls c.find(3) Then what composes the returned iterator? In the graphic below, label A shows the simplest (and most efficient) implementation of a collision-chaining hash table. The little box marked point_iterator shows an object that contains a pointer to the element's node. Note that this "iterator" has no way to move to the next element (it cannot support operator++). Conversely, the little box marked iterator stores both a pointer to the element and some other information (the bucket number of the element). The second iterator, then, is "heavier" than the first one - it requires more time and space. If one were to use a different container to cross-reference into this hash table through these iterators, it would take much more space. As noted above, nothing much can be done by incrementing these iterators, so why is this extra information needed? Alternatively, one might create a collision-chaining hash table where the lists are linked, forming a monolithic total-element list, as in the graphic below, label B. Here the iterators are as light as can be, but the hash table's operations are more complicated.
Point Iteration in Hash Data Structures
It should be noted that containers based on collision-chaining hash-tables are not the only ones with this type of behavior; many other self-organizing data structures display it as well.
Invalidation Guarantees Consider the following snippet: it = c.find(3); c.erase(5); Following the call to erase, what is the validity of it: can it be de-referenced? can it be incremented? The answer depends on the underlying data structure of the container. The graphic below shows three cases: A1 and A2 show a red-black tree; B1 and B2 show a probing hash-table; C1 and C2 show a collision-chaining hash table.
Effect of erase in different underlying data structures
Erasing 5 from A1 yields A2. Clearly, an iterator to 3 can be de-referenced and incremented. The sequence of iterators changed, but in a way that is well-defined by the interface. Erasing 5 from B1 yields B2. Clearly, an iterator to 3 is not valid at all - it cannot be de-referenced or incremented; the order of iterators changed in a way that is (practically) determined by the implementation and not by the interface. Erasing 5 from C1 yields C2. Here the situation is more complicated. On the one hand, there is no problem in de-referencing it. On the other hand, the order of iterators changed in a way that is (practically) determined by the implementation and not by the interface. So with the standard library containers, it is not always possible to express whether it is valid or not. This is true also for insert. Again, the iterator concept seems overloaded.
Functional The design of the functional overlay to the underlying data structures differs slightly from some of the conventions used in the C++ standard. A strict public interface comprises only those operations that depend on the class's internal structure; other operations are best designed as external functions. With this rubric, the standard associative containers lack some useful methods, and provide other methods which would be better removed.
<function>erase</function> Order-preserving standard associative containers provide the method iterator erase(iterator it) which takes an iterator, erases the corresponding element, and returns an iterator to the following element. The standard hash-based associative containers provide this method as well. This seemingly increases genericity between associative containers, since it is possible to use typename C::iterator it = c.begin(); typename C::iterator e_it = c.end(); while(it != e_it) it = pred(*it)? c.erase(it) : ++it; in order to erase from a container object c all elements which match a predicate pred. However, in a different sense this actually decreases genericity: an integral implication of this method is that tree-based associative containers' memory use is linear in the total number of elements they store, while hash-based containers' memory use is not bounded by the number of elements they currently store. Assume a hash-based container is allowed to decrease its size when an element is erased. Then the elements might be rehashed, which means that there is no "next" element - it is simply undefined. Consequently, it is possible to infer from the fact that the standard library's hash-based containers provide this method that they cannot downsize when elements are erased. As a consequence, different code is needed to manipulate different containers, assuming that memory should be conserved. Therefore, this library's non-order-preserving associative containers omit this method. All associative containers include a conditional-erase method template<typename Pred> size_type erase_if(Pred pred) which erases all elements matching a predicate. This is probably the only way to ensure linear-time multiple-item erase which can actually downsize a container. The standard associative containers provide methods for multiple-item erase of the form size_type erase(It b, It e) erasing a range of elements given by a pair of iterators.
For tree-based or trie-based containers, this can be implemented more efficiently as a (small) sequence of split and join operations. For other, unordered, containers, this method isn't much better than an external loop. Moreover, if c is a hash-based container, then c.erase(c.find(2), c.find(5)) is almost certain to do something different than erasing all elements whose keys are between 2 and 5, and is likely to produce other undefined behavior.
<function>split</function> and <function>join</function> It is well known that tree-based and trie-based container objects can be efficiently split or joined. Splitting or joining trees externally is super-linear and, furthermore, can throw exceptions. Split and join methods, consequently, seem good choices for tree-based container methods, especially since, as noted just before, they are efficient replacements for erasing sub-sequences.
<function>insert</function> The standard associative containers provide methods of the form template<class It> size_type insert(It b, It e); for inserting a range of elements given by a pair of iterators. At best, this can be implemented as an external loop or, more efficiently for tree-based or trie-based containers, as a join operation. Moreover, these methods seem similar to constructors taking a range given by a pair of iterators; the constructors, however, are transactional, whereas the insert methods are not; this is possibly confusing.
<function>operator==</function> and <function>operator<=</function> Associative containers are parametrized by policies allowing them to test key equivalence: a hash-based container can do this through its equivalence functor, and a tree-based container can do this through its comparison functor. In addition, some standard associative containers have global function operators, like operator== and operator<=, that allow comparing entire associative containers. In our opinion, these functions are better left out. To begin with, they do not significantly improve over an external loop. More importantly, however, they are possibly misleading - operator==, for example, usually checks for equivalence, or interchangeability, but the associative container cannot check for values' equivalence, only keys' equivalence. Also, are two containers considered equivalent if they store the same values in a different order? This is an arbitrary decision.
Priority Queues
Policy Choices Priority queues are containers that allow efficiently inserting values and accessing the maximal value (in the sense of the container's comparison functor). Their interface supports push and pop. The standard container std::priority_queue indeed supports these methods, but little else. For algorithmic and software-engineering purposes, other methods are needed: Many graph algorithms require increasing a value in a priority queue (again, in the sense of the container's comparison functor), or joining two priority-queue objects. In this library, the return type of priority_queue's push method is a point-type iterator, which can be used for modifying or erasing arbitrary values. For example: priority_queue<int> p; priority_queue<int>::point_iterator it = p.push(3); p.modify(it, 4); These types of cross-referencing operations are necessary for making priority queues useful for different applications, especially graph applications. It is sometimes necessary to erase an arbitrary value in a priority queue. For example, consider the select function for monitoring file descriptors: int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout); As the select documentation states: The nfds argument specifies the range of file descriptors to be tested. The select() function tests file descriptors in the range of 0 to nfds-1. It stands to reason, therefore, that we might wish to maintain a minimal value for nfds, and priority queues immediately come to mind. Note, though, that when a socket is closed, the minimal file descriptor might change; in the absence of an efficient means to erase an arbitrary value from a priority queue, we might as well avoid its use altogether. The standard containers typically support iterators. It is somewhat unusual for std::priority_queue to omit them.
One might ask why priority queues need to support iterators, since they are self-organizing containers with a different purpose than abstracting sequences. There are several reasons: Iterators (even in self-organizing containers) are useful for many purposes: cross-referencing containers, serialization, and debugging code that uses these containers. The standard library's hash-based containers support iterators, even though they too are self-organizing containers with a different purpose than abstracting sequences. In standard-library-like containers, it is natural to specify the interface of operations for modifying a value or erasing a value (discussed previously) in terms of iterators. It should be noted that the standard containers also use iterators for accessing and manipulating a specific value. In hash-based containers, one checks the existence of a key by comparing the iterator returned by find to the iterator returned by end, and not by comparing a pointer returned by find to NULL.
Underlying Data Structures There are three main implementations of priority queues: the first employs a binary heap, typically one which uses a sequence; the second uses a tree (or forest of trees), which is typically less structured than an associative container's tree; the third simply uses an associative container. These are shown in the figure below with labels A1 and A2, B, and C.
Underlying Priority Queue Data Structures
No single implementation can completely replace any of the others. Some have better push and pop amortized performance, some have better bounded (worst-case) response time than others, some optimize a single method at the expense of others, etc. In general the "best" implementation is dictated by the specific problem. As with associative containers, the more implementations co-exist, the more necessary a traits mechanism is for handling generic containers safely and efficiently. This is especially important for priority queues, since the invalidation guarantees of one of the most useful data structures - binary heaps - are markedly different from those of most of the others.
Binary Heaps Binary heaps are one of the most useful underlying data structures for priority queues. They are very efficient in terms of memory (since they don't require per-value structure metadata), and have the best amortized push and pop performance for primitive types like int. The standard library's priority_queue implements this data structure as an adapter over a sequence, typically std::vector or std::deque, which correspond to labels A1 and A2 respectively in the graphic above. This is indeed an elegant example of the adapter concept and the algorithm/container/iterator decomposition. There are several reasons why a binary-heap priority queue may be better implemented as a container instead of a sequence adapter: std::priority_queue cannot erase values from its adapted sequence (irrespective of the sequence type). This means that the memory use of an std::priority_queue object is always proportional to the maximal number of values it ever contained, and not to the number of values that it currently contains. (See performance/priority_queue_text_pop_mem_usage.cc.) This implementation of binary heaps acts very differently than other underlying data structures (see also pairing heaps). Some combinations of adapted sequences and value types are very inefficient or just don't make sense. If one uses std::priority_queue<std::string, std::vector<std::string> >, for example, then not only will each operation perform a logarithmic number of std::string assignments, but, furthermore, any operation (including pop) can render the container useless due to exceptions. Conversely, if one uses std::priority_queue<int, std::deque<int> >, then each operation incurs a logarithmic number of indirect accesses (through pointers) unnecessarily. It might be better to let the container make a conservative deduction whether to use the structure in the graphic above, labels A1 or A2.
There does not seem to be a systematic way to determine what exactly can be done with the priority queue. If p is a priority queue adapting an std::vector, then it is possible to iterate over all values by using &p.top() and &p.top() + p.size(), but this will not work if p is adapting an std::deque; in any case, one cannot use p.begin() and p.end(). If a different sequence is adapted, it is even more difficult to determine what can be done. If p is a priority queue adapting an std::deque, then the reference returned by p.top() will remain valid until it is popped, but if p adapts an std::vector, the next push will invalidate it. If a different sequence is adapted, it is even more difficult to determine what can be done. Sequence-based binary heaps can still implement linear-time erase and modify operations. This means that if one needs to erase a small (say logarithmic) number of values, then one might still choose this underlying data structure. Using std::priority_queue, however, this will generally change the order of growth of the entire sequence of operations.
Using
Prerequisites The library contains only header files, and does not require any other libraries except the standard C++ library. All classes are defined in namespace __gnu_pbds. The library internally uses macros beginning with PB_DS, but #undefs anything it #defines (except for header guards). Compiling the library in an environment where macros beginning with PB_DS are defined may yield unpredictable results in compilation, execution, or both. Further dependencies are necessary to create the visual output for the performance tests. To create these graphs, an additional package is needed: pychart.
Organization The various data structures are organized as follows.
Branch-Based: basic_branch is an abstract base class for branch-based associative containers; tree is a concrete base class for tree-based associative containers; trie is a concrete base class for trie-based associative containers.
Hash-Based: basic_hash_table is an abstract base class for hash-based associative containers; cc_hash_table is a concrete collision-chaining hash-based associative container; gp_hash_table is a concrete (general) probing hash-based associative container.
List-Based: list_update is a list-based associative container with an update policy.
Heap-Based: priority_queue is a priority queue.
The hierarchy is composed naturally, so that commonality is captured by base classes. Thus operator[] is defined at the base of any hierarchy, since all derived containers support it. Conversely, split is defined in basic_branch, since only tree-like containers support it. In addition, there are the following diagnostics classes, used to report errors specific to this library's data structures.
Exception Hierarchy
Tutorial
Basic Use For the most part, the policy-based containers in namespace __gnu_pbds have the same interface as the equivalent containers in the standard C++ library, except for the names used for the container classes themselves. For example, this shows basic operations on a collision-chaining hash-based container: #include <cassert> #include <ext/pb_ds/assoc_container.h> int main() { __gnu_pbds::cc_hash_table<int, char> c; c[2] = 'b'; assert(c.find(1) == c.end()); } The container is called __gnu_pbds::cc_hash_table instead of std::unordered_map, since unordered map does not necessarily mean a hash-based map as implied by the C++ library (C++11 or TR1). For example, list-based associative containers, which are very useful for the construction of "multimaps", are also unordered. This snippet shows a red-black tree based container: #include <cassert> #include <ext/pb_ds/assoc_container.h> int main() { __gnu_pbds::tree<int, char> c; c[2] = 'b'; assert(c.find(2) != c.end()); } The container is called tree instead of map since the underlying data structures are being named with specificity. The member function naming convention is to strive to be the same as the equivalent member functions in other C++ standard library containers. The familiar methods are unchanged: begin, end, size, empty, and clear. This isn't to say that things are exactly as one would expect, given the container requirements and interfaces in the C++ standard. The names of containers' policies and policy accessors are different than usual. For example, if hash_type is some type of hash-based container, then hash_type::hash_fn gives the type of its hash functor, and if obj is some hash-based container object, then obj.get_hash_fn() will return a reference to its hash-functor object.
Similarly, if tree_type is some type of tree-based container, then tree_type::cmp_fn gives the type of its comparison functor, and if obj is some tree-based container object, then obj.get_cmp_fn() will return a reference to its comparison-functor object. It would be nice to give names consistent with those in the existing C++ standard (inclusive of TR1). Unfortunately, these standard containers don't consistently name types and methods. For example, std::tr1::unordered_map uses hasher for the hash functor, but std::map uses key_compare for the comparison functor. Also, we could not find an accessor for std::tr1::unordered_map's hash functor, while std::map uses key_comp for accessing the comparison functor. Instead, __gnu_pbds attempts to be internally consistent, and uses standard-derived terminology if possible. Another source of difference is in scope: __gnu_pbds contains more types of associative containers than the standard C++ library, and more opportunities to configure these new containers, since different types of associative containers are useful in different settings. Namespace __gnu_pbds contains different classes for hash-based containers, tree-based containers, trie-based containers, and list-based containers. Since associative containers share parts of their interface, they are organized as a class hierarchy. Each type or method is defined in the most-common ancestor in which it makes sense. For example, all associative containers support iteration expressed in the following form: const_iterator begin() const; iterator begin(); const_iterator end() const; iterator end(); But not all containers contain or use hash functors. Yet, both collision-chaining and (general) probing hash-based associative containers have a hash functor, so basic_hash_table contains the interface: const hash_fn& get_hash_fn() const; hash_fn& get_hash_fn(); so all hash-based associative containers inherit the same hash-functor accessor methods.
Configuring via Template Parameters In general, each of this library's containers is parametrized by more policies than those of the standard library. For example, the standard hash-based container is parametrized as follows: template<typename Key, typename Mapped, typename Hash, typename Pred, typename Allocator, bool Cache_Hash_Code> class unordered_map; and so can be configured by key type, mapped type, a functor that translates keys to unsigned integral types, an equivalence predicate, an allocator, and an indicator whether to store hash values with each entry. This library's collision-chaining hash-based container is parametrized as template<typename Key, typename Mapped, typename Hash_Fn, typename Eq_Fn, typename Comb_Hash_Fn, typename Resize_Policy, bool Store_Hash, typename Allocator> class cc_hash_table; and so can be configured by the first four types of std::tr1::unordered_map, then a policy for translating the key-hash result into a position within the table, then a policy by which the table resizes, an indicator whether to store hash values with each entry, and an allocator (which is typically the last template parameter in standard containers). Nearly all policy parameters have default values, so this need not be considered for casual use. It is important to note, however, that hash-based containers' policies can dramatically alter their performance in different settings, and that tree-based containers' policies can make them useful for purposes other than just look-up. As opposed to associative containers, priority queues have relatively few configuration options. The priority queue is parametrized as follows: template<typename Value_Type, typename Cmp_Fn, typename Tag, typename Allocator> class priority_queue; The Value_Type, Cmp_Fn, and Allocator parameters are the container's value type, comparison-functor type, and allocator type, respectively; these are very similar to the standard's priority queue. 
The Tag parameter is different: there are a number of pre-defined tag types corresponding to binary heaps, binomial heaps, etc., and Tag should be instantiated by one of them. Note that, as opposed to std::priority_queue, __gnu_pbds::priority_queue is not a sequence-adapter; it is a regular container.
Querying Container Attributes A container's underlying data structure affects its performance; unfortunately, it can also affect its interface. When manipulating associative containers generically, it is often useful to be able to statically determine what they can support and what they cannot. Happily, the standard provides a good solution to a similar problem - that of the different behavior of iterators. If It is an iterator, then typename std::iterator_traits<It>::iterator_category is one of a small number of pre-defined tag classes, and typename std::iterator_traits<It>::value_type is the value type to which the iterator "points". Similarly, in this library, if C is a container, then container_traits<C> is a traits class that stores information about the kind of container that is implemented. typename container_traits<C>::container_category is one of a small number of predefined tag structures that uniquely identifies the type of underlying data structure. In most cases, however, the exact underlying data structure is not really important; what is important is one of its other attributes: whether it guarantees storing elements by key order, for example. For this one can use typename container_traits<C>::order_preserving Also, typename container_traits<C>::invalidation_guarantee is the container's invalidation guarantee. Invalidation guarantees are especially important for priority queues, since in this library's design, iterators are practically the only way to manipulate them.
Point and Range Iteration This library differentiates between two types of methods and iterators: point-type, and range-type. For example, find and insert are point-type methods, since they each deal with a specific element; their returned iterators are point-type iterators. begin and end are range-type methods, since they are not used to find a specific element, but rather to go over all elements in a container object; their returned iterators are range-type iterators. Most containers store elements in an order that is determined by their interface. Correspondingly, it is fine that their point-type iterators are synonymous with their range-type iterators. For example, in the following snippet std::for_each(c.find(1), c.find(5), foo); two point-type iterators (returned by find) are used for a range-type purpose - going over all elements whose key is between 1 and 5. Conversely, the above snippet makes no sense for self-organizing containers - ones that order (and reorder) their elements by implementation. It would be nice to have a uniform iterator system that would allow the above snippet to compile only if it made sense. This could trivially be done by specializing std::for_each for the case of iterators returned by std::tr1::unordered_map, but this would only solve the problem for one algorithm and one container. Fundamentally, the problem is that one can loop using a self-organizing container's point-type iterators. This library's containers define two families of iterators: point_const_iterator and point_iterator are the iterator types returned by point-type methods; const_iterator and iterator are the iterator types returned by range-type methods. class <- some container -> { public: ... typedef <- something -> const_iterator; typedef <- something -> iterator; typedef <- something -> point_const_iterator; typedef <- something -> point_iterator; ... public: ... const_iterator begin () const; iterator begin(); point_const_iterator find(...) 
const; point_iterator find(...); }; For containers whose interface defines sequence order, it is very simple: point-type and range-type iterators are exactly the same, which means that the above snippet will compile if it is used for an order-preserving associative container. For self-organizing containers, however (hash-based containers being a prime example), the preceding snippet will not compile, because their point-type iterators do not support operator++. In any case, both for order-preserving and self-organizing containers, the following snippet will compile: typename Cntnr::point_iterator it = c.find(2); because a range-type iterator can always be converted to a point-type iterator. Distinguishing between iterator types also raises the point that a container's iterators might have different invalidation rules concerning their de-referencing abilities and movement abilities. This corresponds exactly to the question of whether point-type and range-type iterators are valid. As explained above, container_traits allows querying a container for its data structure attributes. The iterator-invalidation guarantees are certainly a property of the underlying data structure, and so container_traits<C>::invalidation_guarantee gives one of three pre-determined types that answer this query.
Examples Additional code examples are provided in the source distribution, as part of the regression and performance testsuite.
Intermediate Use Basic use of maps: basic_map.cc Basic use of sets: basic_set.cc Conditionally erasing values from an associative container object: erase_if.cc Basic use of multimaps: basic_multimap.cc Basic use of multisets: basic_multiset.cc Basic use of priority queues: basic_priority_queue.cc Splitting and joining priority queues: priority_queue_split_join.cc Conditionally erasing values from a priority queue: priority_queue_erase_if.cc
Querying with <classname>container_traits</classname> Using container_traits to query about underlying data structure behavior: assoc_container_traits.cc A non-compiling example showing wrong use of finding keys in hash-based containers: hash_find_neg.cc Using container_traits to query about underlying data structure behavior: priority_queue_container_traits.cc
By Container Method
Hash-Based
size Related Setting the initial size of a hash-based container object: hash_initial_size.cc A non-compiling example showing how not to resize a hash-based container object: hash_resize_neg.cc Resizing a hash-based container object: hash_resize.cc Showing an illegal resize of a hash-based container object: hash_illegal_resize.cc Changing the load factors of a hash-based container object: hash_load_set_change.cc
Hashing Function Related Using a modulo range-hashing function for the case of an unknown skewed key distribution: hash_mod.cc Writing a range-hashing functor for the case of a known skewed key distribution: shift_mask.cc Storing the hash value along with each key: store_hash.cc Writing a ranged-hash functor: ranged_hash.cc
Branch-Based
split or join Related Joining two tree-based container objects: tree_join.cc Splitting a PATRICIA trie container object: trie_split.cc Order statistics while joining two tree-based container objects: tree_order_statistics_join.cc
Node Invariants Using trees for order statistics: tree_order_statistics.cc Augmenting trees to support operations on line intervals: tree_intervals.cc
trie Using a PATRICIA trie for DNA strings: trie_dna.cc Using a PATRICIA trie for finding all entries whose key matches a given prefix: trie_prefix_search.cc
Priority Queues Cross referencing an associative container and a priority queue: priority_queue_xref.cc Cross referencing a vector and a priority queue using a very simple version of Dijkstra's shortest path algorithm: priority_queue_dijkstra.cc
Design
Concepts
Null Policy Classes Associative containers are typically parametrized by various policies. For example, a hash-based associative container is parametrized by a hash-functor, transforming each key into a non-negative numerical type. Each such value is then further mapped into a position within the table. The mapping of a key into a position within the table is therefore a two-step process. In some cases, instantiations are redundant. For example, when the keys are integers, it is possible to use a redundant hash policy, which transforms each key into its value. In some other cases, these policies are irrelevant. For example, a hash-based associative container might transform keys into positions within a table by a different method than the two-step method described above. In such a case, the hash functor is simply irrelevant. When a policy is either redundant or irrelevant, it can be replaced by null_type. For example, a set is an associative container with one of its template parameters (the one for the mapped type) replaced with null_type. Other places where simplifications are made possible with this technique include node updates in tree and trie data structures, and hash and probe functions for hash data structures.
Map and Set Semantics
Distinguishing Between Maps and Sets Anyone familiar with the standard knows that there are four kinds of associative containers: maps, sets, multimaps, and multisets. The map datatype associates each key to some data. Sets are associative containers that simply store keys - they do not map them to anything. In the standard, each map class has a corresponding set class. E.g., std::map<int, char> maps each int to a char, but std::set<int> simply stores ints. In this library, however, there are no distinct classes for maps and sets. Instead, an associative container's Mapped template parameter is a policy: if it is instantiated by null_type, then it is a "set"; otherwise, it is a "map". E.g., cc_hash_table<int, char> is a "map" mapping each int value to a char, but cc_hash_table<int, null_type> is a type that uniquely stores int values. Once the Mapped template parameter is instantiated by null_type, the "set" acts very similarly to the standard's sets - it does not map each key to a distinct null_type object. Also, the container's value_type is essentially its key_type - just as with the standard's sets. The standard's multimaps and multisets allow, respectively, non-uniquely mapping keys and non-uniquely storing keys. As discussed, the reasons why this might be necessary are 1) that a key might be decomposed into a primary key and a secondary key, 2) that a key might appear more than once, or 3) any arbitrary combination of 1)s and 2)s. Correspondingly, one should use 1) "maps" mapping primary keys to secondary keys, 2) "maps" mapping keys to size types, or 3) any arbitrary combination of 1)s and 2)s. Thus, for example, an std::multiset<int> might be used to store multiple instances of integers, but using this library's containers, one might use tree<int, size_t>, i.e., a map of ints to size_ts. 
These "multimaps" and "multisets" might be confusing to anyone familiar with the standard's std::multimap and std::multiset, because there is no clear correspondence between the two. For example, in some cases where one uses std::multiset in the standard, one might use in this library a "multimap" of "multisets" - i.e., a container that maps primary keys each to an associative container that maps each secondary key to the number of times it occurs. When one uses a "multimap," one should choose with care the type of container used for secondary keys.
Alternatives to <classname>std::multiset</classname> and <classname>std::multimap</classname> Brace oneself: this library does not contain containers like std::multimap or std::multiset. Instead, these data structures can be synthesized via manipulation of the Mapped template parameter. One maps the unique part of a key - the primary key - into an associative-container of the (originally) non-unique parts of the key - the secondary key. A primary associative-container is an associative container of primary keys; a secondary associative-container is an associative container of secondary keys. Stepping back a bit, let's start from the beginning. Maps (or sets) allow mapping (or storing) unique-key values. The standard library also supplies associative containers which map (or store) multiple values with equivalent keys: std::multimap, std::multiset, std::tr1::unordered_multimap, and std::tr1::unordered_multiset. We first discuss how these might be used, then why we think it is best to avoid them. Suppose one builds a simple bank-account application that records, for each client (identified by an std::string) and account-id (an unsigned long), the balance in the account (a float). Suppose further that ordering this information is not useful, so a hash-based container is preferable to a tree-based container. Then one can use std::tr1::unordered_map<std::pair<std::string, unsigned long>, float, ...> which hashes every combination of client and account-id. This might work well, except for the fact that it is now impossible to efficiently list all of the accounts of a specific client (this would practically require iterating over all entries). Instead, one can use std::tr1::unordered_multimap<std::pair<std::string, unsigned long>, float, ...> which hashes every client, and decides equivalence based on client only. This will ensure that all accounts belonging to a specific user are stored consecutively. 
Also, suppose one wants a priority queue of integers (a container that supports push, pop, and top operations, the last of which returns the largest int) that also supports operations such as find and lower_bound. A reasonable solution is to build an adapter over std::set<int>. In this adapter, push will just call the tree-based associative container's insert method; pop will call its end method, and use it to return the preceding element (which must be the largest). This might work well, except that the container object cannot hold multiple instances of the same integer (push(4) will be a no-op if 4 is already in the container object). If multiple keys are necessary, then one might build the adapter over an std::multiset<int>. The standard library's non-unique-mapping containers are useful when (1) a key can be decomposed into a primary key and a secondary key, (2) a key is needed multiple times, or (3) any combination of (1) and (2). The graphic below shows how the standard library's container design works internally; in this figure nodes shaded equally represent equivalent-key values. Equivalent keys are stored consecutively using the properties of the underlying data structure: binary search trees (label A) store equivalent-key values consecutively (in the sense of an in-order walk) naturally; collision-chaining hash tables (label B) store equivalent-key values in the same bucket, and the bucket can be arranged so that equivalent-key values are consecutive.
Non-unique Mapping Standard Containers Non-unique Mapping Standard Containers
Put differently, the standard's non-unique mapping associative-containers are associative containers that map primary keys to linked lists that are embedded into the container. The graphic below shows again the two containers from the first graphic above, this time with the embedded linked lists of the grayed nodes marked explicitly.
Effect of embedded lists in <classname>std::multimap</classname> Effect of embedded lists in std::multimap
These embedded linked lists have several disadvantages. The underlying data structure embeds the linked lists according to its own considerations, which means that the search path for a value might include several different equivalent-key values. For example, the search path for the black node in the first graphic, label A or B, includes more than a single gray node. The links of the linked lists are the underlying data structure's nodes, which typically are quite structured. In the case of tree-based containers (the first graphic above, label A), each "link" is actually a node with three pointers (one to a parent and two to children), and a relatively complicated iteration algorithm. The linked lists, therefore, can take up quite a lot of memory, and iterating over all values equal to a given key (through the return value of the standard library's equal_range) can be expensive. The primary key is stored multiple times, which uses more memory. Finally, the interface of this design excludes several useful underlying data structures. Of all the unordered self-organizing data structures, practically only collision-chaining hash tables can (efficiently) guarantee that equivalent-key values are stored consecutively. The above reasons hold even when the ratio of secondary keys to primary keys (or average number of identical keys) is small, but when it is large, there are more severe problems: The underlying data structures order the links inside each embedded linked list according to their internal considerations, which effectively means that each of the links is unordered. Irrespective of the underlying data structure, searching for a specific value can degrade to linear complexity. Similarly to the above point, it is impossible to apply to the secondary keys considerations that apply to primary keys. For example, it is not possible to maintain secondary keys by sorted order. 
While the interface "understands" that all equivalent-key values constitute a distinct list (through equal_range), the underlying data structure typically does not. This means that operations such as erasing from a tree-based container all values whose keys are equivalent to a given key can be super-linear in the size of the tree; this is also true for several other operations that target a specific list. In this library, all associative containers map (or store) unique-key values. One can (1) map primary keys to secondary associative-containers (containers of secondary keys) or non-associative containers, (2) map identical keys to a size-type representing the number of times they occur, or (3) any combination of (1) and (2). Instead of allowing multiple equivalent-key values, this library supplies associative containers based on underlying data structures that are suitable as secondary associative-containers. In the figure below, labels A and B show the equivalent underlying data structures in this library, as mapped to labels A and B of the first graphic above. Each shaded box represents some size-type or secondary associative-container.
Non-unique Mapping Containers Non-unique Mapping Containers
In the first example above, then, one would use an associative container mapping each client to an associative container which maps each account-id to a balance (see example/basic_multimap.cc); in the second example, one would use an associative container mapping each int to some size-type indicating the number of times it logically occurs (see example/basic_multiset.cc). See the discussion in list-based container types for containers especially suited as secondary associative-containers.
Iterator Semantics
Point and Range Iterators Iterator concepts are bifurcated in this design, and are comprised of point-type and range-type iteration. A point-type iterator is an iterator that refers to a specific element, as returned through an associative-container's find method. A range-type iterator is an iterator that is used to go over a sequence of elements, as returned by a container's begin method. A point-type method is a method that returns a point-type iterator; a range-type method is a method that returns a range-type iterator. For most containers, these types are synonymous; for self-organizing containers, such as hash-based containers or priority queues, these are inherently different (in any implementation, including that of C++ standard library components), but in this design, the distinction is made explicit: they are distinct types.
Distinguishing Point and Range Iterators When using this library, it is necessary to differentiate between two types of methods and iterators: point-type methods and iterators, and range-type methods and iterators. Each associative container's interface includes the methods: point_const_iterator find(const_key_reference r_key) const; point_iterator find(const_key_reference r_key); std::pair<point_iterator, bool> insert(const_reference r_val); The relationship between these iterator types varies between container types. The figure below shows the most general invariant between point-type and range-type iterators: in A, iterator can always be converted to point_iterator. B shows invariants for order-preserving containers: point-type iterators are synonymous with range-type iterators. Orthogonally, C shows invariants for "set" containers: iterators are synonymous with const iterators.
Point Iterator Hierarchy Point Iterator Hierarchy
Note that point-type iterators in self-organizing containers (e.g., hash-based associative containers) lack movement operators, such as operator++ - in fact, this is the reason why this library differentiates from the standard C++ library's design on this point. Typically, one can determine an iterator's movement capabilities using std::iterator_traits<It>::iterator_category, which is a struct indicating the iterator's movement capabilities. Unfortunately, none of the standard predefined categories reflect an iterator's having no movement capabilities whatsoever. Consequently, pb_ds adds a type trivial_iterator_tag (whose name is taken from a concept in C++ standardese), which is the category of iterators with no movement capabilities. All other standard C++ library tags, such as forward_iterator_tag, retain their common use.
Invalidation Guarantees If one manipulates a container object, then iterators previously obtained from it can be invalidated. In some cases a previously-obtained iterator cannot be de-referenced; in other cases, the iterator's next or previous element might have changed unpredictably. This corresponds exactly to the question of whether a point-type or range-type iterator (see previous concept) is valid or not. In this design, one can query a container (at compile time) about its invalidation guarantees. Given three different types of associative containers, a modifying operation (for example, erase) can invalidate iterators in three different ways: the iterator of one container may remain completely valid - it can be de-referenced and incremented; the iterator of a different container may not even be de-referenceable; the iterator of the third container may be de-referenceable, but its "next" iterator changed unpredictably. Distinguishing between point-type and range-type iterators allows fine-grained invalidation guarantees, because these questions correspond exactly to the question of whether point-type iterators and range-type iterators are valid. The graphic below shows tags corresponding to different types of invalidation guarantees.
Invalidation Guarantee Tags Hierarchy Invalidation Guarantee Tags Hierarchy
basic_invalidation_guarantee corresponds to a basic guarantee that a point-type iterator, a found pointer, or a found reference, remains valid as long as the container object is not modified. point_invalidation_guarantee corresponds to a guarantee that a point-type iterator, a found pointer, or a found reference, remains valid even if the container object is modified. range_invalidation_guarantee corresponds to a guarantee that a range-type iterator remains valid even if the container object is modified. To find the invalidation guarantee of a container, one can use typename container_traits<Cntnr>::invalidation_guarantee Note that this hierarchy corresponds to the logic it represents: if a container has range-invalidation guarantees, then it must also have point-invalidation guarantees; correspondingly, its invalidation guarantee (in this case range_invalidation_guarantee) can be cast to its base class (in this case point_invalidation_guarantee). This means that this hierarchy can be used easily using standard metaprogramming techniques, by specializing on the type of invalidation_guarantee. These types of problems were addressed, in a more general setting, in - Item 2. In our opinion, an invalidation-guarantee hierarchy would solve these problems in all container types - not just associative containers.
Genericity The design attempts to address the following problem of data-structure genericity. When writing a function manipulating a generic container object, what is the behavior of the object? Suppose one writes template<typename Cntnr> void some_op_sequence(Cntnr &r_container) { ... } then one needs to address the following questions in the body of some_op_sequence: Which types and methods does Cntnr support? Containers based on hash tables can be queried for the hash-functor type and object; this is meaningless for tree-based containers. Containers based on trees can be split, joined, or can erase iterators and return the following iterator; this cannot be done by hash-based containers. What are the exception and invalidation guarantees of Cntnr? A container based on a probing hash-table invalidates all iterators when it is modified; this is not the case for containers based on node-based trees. Containers based on a node-based tree can be split or joined without exceptions; this is not the case for containers based on vector-based trees. How does the container maintain its elements? Tree-based and trie-based containers store elements by key order; others, typically, do not. A container based on splay trees or on lists with update policies "caches" "frequently accessed" elements; containers based on most other underlying data structures do not. How does one query a container about characteristics and capabilities? What is the relationship between two different data structures, if anything? The remainder of this section explains these issues in detail.
Tag Tags are very useful for manipulating generic types. For example, if It is an iterator class, then typename It::iterator_category or typename std::iterator_traits<It>::iterator_category will yield its category, and typename std::iterator_traits<It>::value_type will yield its value type. This library contains a container tag hierarchy corresponding to the diagram below.
Container Tag Hierarchy Container Tag Hierarchy
Given any container Cntnr, the tag of the underlying data structure can be found via typename Cntnr::container_category.
Traits Additionally, a traits mechanism can be used to query a container type for its attributes. Given any container Cntnr, container_traits<Cntnr> is a traits class identifying the properties of the container. To find if a container can throw when a key is erased (which is true for vector-based trees, for example), one can use container_traits<Cntnr>::erase_can_throw Some of the definitions in container_traits are dependent on other definitions. If container_traits<Cntnr>::order_preserving is true (which is the case for containers based on trees and tries), then the container can be split or joined; in this case, container_traits<Cntnr>::split_join_can_throw indicates whether splits or joins can throw exceptions (which is true for vector-based trees); otherwise container_traits<Cntnr>::split_join_can_throw will yield a compilation error. (This is somewhat similar to a compile-time version of the COM model).
By Container
hash
Interface The collision-chaining hash-based container has the following declaration. template< typename Key, typename Mapped, typename Hash_Fn = std::hash<Key>, typename Eq_Fn = std::equal_to<Key>, typename Comb_Hash_Fn = direct_mask_range_hashing<>, typename Resize_Policy = default (explained below), bool Store_Hash = false, typename Allocator = std::allocator<char> > class cc_hash_table; The parameters have the following meaning: Key is the key type. Mapped is the mapped-policy. Hash_Fn is a key hashing functor. Eq_Fn is a key equivalence functor. Comb_Hash_Fn is a range-hashing functor; it describes how to translate hash values into positions within the table. Resize_Policy describes how a container object should change its internal size. Store_Hash indicates whether the hash value should be stored with each entry. Allocator is an allocator type. The probing hash-based container has the following declaration. template< typename Key, typename Mapped, typename Hash_Fn = std::hash<Key>, typename Eq_Fn = std::equal_to<Key>, typename Comb_Probe_Fn = direct_mask_range_hashing<>, typename Probe_Fn = default (explained below), typename Resize_Policy = default (explained below), bool Store_Hash = false, typename Allocator = std::allocator<char> > class gp_hash_table; The parameters are identical to those of the collision-chaining container, except for the following. Comb_Probe_Fn describes how to transform a probe sequence into a sequence of positions within the table. Probe_Fn describes a probe sequence policy. Some of the default template values depend on the values of other parameters, and are explained below.
Details
Hash Policies
General Following is an explanation of some functions which hashing involves. The graphic below illustrates the discussion.
[Figure: Hash functions, ranged-hash functions, and range-hashing functions]
Let U be a domain (e.g., the integers, or the strings of 3 characters). A hash-table algorithm needs to map elements of U "uniformly" into the range [0, ..., m - 1] (where m is a non-negative integral value, and is, in general, time-varying). I.e., the algorithm needs a ranged-hash function f : U × Z+ → Z+ such that, for any u in U, 0 ≤ f(u, m) ≤ m - 1, and which has "good uniformity" properties. One common solution is to use the composition of a hash function h : U → Z+, which maps elements of U into the non-negative integers, and g : Z+ × Z+ → Z+, which maps a non-negative hash value and a non-negative range upper-bound into a non-negative integer in the range between 0 (inclusive) and the range upper bound (exclusive), i.e., for any r in Z+, 0 ≤ g(r, m) ≤ m - 1. The resulting ranged-hash function is f(u, m) = g(h(u), m). From the above, it is obvious that, given g and h, f can always be composed (the converse, however, is not true). The standard's hash-based containers allow specifying a hash function, and use a hard-wired range-hashing function; the ranged-hash function is implicitly composed. The above describes the case where a key is to be mapped into a single position within a hash table, e.g., in a collision-chaining table. In other cases, a key is to be mapped into a sequence of positions within a table, e.g., in a probing table. Similar terms apply in this case: the table requires a ranged probe function, mapping a key into a sequence of positions within the table. This is typically achieved by composing a hash function mapping the key into a non-negative integral type, a probe function transforming the hash value into a sequence of hash values, and a range-hashing function transforming the sequence of hash values into a sequence of positions.
Range Hashing Some common choices for range-hashing functions are the division, multiplication, and middle-square methods, defined as g(r, m) = r mod m, g(r, m) = ⌈ (u/v)(a r mod v) ⌉, and g(r, m) = ⌈ (u/v)(r² mod v) ⌉, respectively, for some positive integers u and v (typically powers of 2) and some a. Each of these range-hashing functions works best in a different setting. The division method is a very common choice. However, even this single method can be implemented in two very different ways. It is possible to implement it using the low-level % (modulo) operation (for any m), or the low-level & (bit-mask) operation (for the case where m is a power of 2), i.e., g(r, m) = r % m, and g(r, m) = r & (m - 1) (with m = 2^k for some k), respectively. The % (modulo) implementation has the advantage that, for m a prime far from a power of 2, g(r, m) is affected by all the bits of r (minimizing the chance of collision). It has the disadvantage of using the costly modulo operation. This method is hard-wired into SGI's implementation. The & (bit-mask) implementation has the advantage of relying on the fast bit-wise and operation. It has the disadvantage that g(r, m) is affected only by the low-order bits of r. This method is hard-wired into Dinkumware's implementation.
Ranged Hash In some cases it is beneficial to allow the client to directly specify a ranged-hash function. It is true that the writer of the ranged-hash function cannot rely on the values of m having specific numerical properties suitable for hashing, since the values of m are determined by a resize policy with possibly orthogonal considerations. There are two cases where a ranged-hash function can be superior. The first is when using perfect hashing; the second is when the values of m can be used to estimate the "general" number of distinct values required. This is described in the following. Let s = [s0, ..., st-1] be a string of t characters, each of which is from domain S. Consider the following ranged-hash function: f1(s, m) = (∑ i = 0 .. t-1 si a^i) mod m, where a is some non-negative integral value. This is the standard string-hashing function used in SGI's implementation (with a = 5). Its advantage is that it takes into account all of the characters of the string. Now assume that s is the string representation of a long DNA sequence (and so S = {'A', 'C', 'G', 'T'}). In this case, scanning the entire string might be prohibitively expensive. A possible alternative might be to use only the first k characters of the string, where |S|^k ≥ m, i.e., using the hash function f2(s, m) = (∑ i = 0 .. k-1 si a^i) mod m, requiring a scan over only k = log4(m) characters. Other, more elaborate, hash functions might scan k characters starting at a random position (determined at each resize), or scan k random positions (determined at each resize), i.e., use f3(s, m) = (∑ i = r0 .. r0+k-1 si a^i) mod m, or f4(s, m) = (∑ i = 0 .. k-1 sri a^ri) mod m, respectively, for r0, ..., rk-1 each in the (inclusive) range [0, ..., t-1]. It should be noted that the above functions cannot be decomposed as a ranged hash composed of hash and range-hashing functions.
Implementation This sub-subsection describes the implementation of the above in this library. It first explains range-hashing functions in collision-chaining tables, then ranged-hash functions in collision-chaining tables, then probing-based tables, and finally lists the relevant classes in this library.
Range-Hashing and Ranged-Hashes in Collision-Chaining Tables cc_hash_table is parametrized by Hash_Fn and Comb_Hash_Fn, a hash functor and a combining hash functor, respectively. In general, Comb_Hash_Fn is considered a range-hashing functor. cc_hash_table synthesizes a ranged-hash function from Hash_Fn and Comb_Hash_Fn. The figure below shows an insert sequence diagram for this case. The user inserts an element (point A); the container transforms the key into a non-negative integral using the hash functor (points B and C), and transforms the result into a position using the combining functor (points D and E).
[Figure: Insert hash sequence diagram]
If cc_hash_table's hash functor, Hash_Fn, is instantiated by null_type, then Comb_Hash_Fn is taken to be a ranged-hash function. The graphic below shows an insert sequence diagram for this case. The user inserts an element (point A); the container transforms the key into a position using the combining functor (points B and C).
[Figure: Insert hash sequence diagram with a null policy]
Probing tables gp_hash_table is parametrized by Hash_Fn, Probe_Fn, and Comb_Probe_Fn. As before, if Hash_Fn and Probe_Fn are both null_type, then Comb_Probe_Fn is a ranged-probe functor. Otherwise, Hash_Fn is a hash functor, Probe_Fn is a functor for offsets from a hash value, and Comb_Probe_Fn transforms a probe sequence into a sequence of positions within the table.
Pre-Defined Policies This library contains some pre-defined classes implementing range-hashing and probing functions: direct_mask_range_hashing and direct_mod_range_hashing are range-hashing functions based on a bit-mask and a modulo operation, respectively. linear_probe_fn, and quadratic_probe_fn are a linear probe and a quadratic probe function, respectively. The graphic below shows the relationships.
[Figure: Hash policy class diagram]
Resize Policies
General Hash-tables, as opposed to trees, do not naturally grow or shrink. It is necessary to specify policies to determine how and when a hash table should change its size. Usually, resize policies can be decomposed into orthogonal policies: A size policy indicating how a hash table should grow (e.g., it should multiply by powers of 2). A trigger policy indicating when a hash table should grow (e.g., a load factor is exceeded).
Size Policies Size policies determine how a hash table changes size. These policies are simple, and there are relatively few sensible options. An exponential-size policy (with the initial size and growth factors both powers of 2) works well with a mask-based range-hashing function, and is the hard-wired policy used by Dinkumware. A prime-list based policy works well with a modulo-prime range hashing function and is the hard-wired policy used by SGI's implementation.
Trigger Policies Trigger policies determine when a hash table changes size. Following is a description of two policies: load-check policies and collision-check policies. Load-check policies are straightforward. The user specifies two load factors, α_min and α_max, and the hash table maintains the invariant that α_min ≤ (number of stored elements) / (hash-table size) ≤ α_max. Collision-check policies work in the opposite direction of load-check policies. They focus on keeping the number of collisions moderate and hoping that the size of the table will not grow very large, instead of keeping a moderate load factor and hoping that the number of collisions will be small. A maximal collision-check policy resizes when the longest probe sequence grows too large. Consider the graphic below. Let the size of the hash table be denoted by m, the length of a probe sequence by k, and some load factor by α. We would like to calculate the minimal k such that, if there were α m elements in the hash table, a probe sequence of length k would be found with probability at most 1/m.
[Figure: Balls and bins]
Denote by p_i the probability that a probe sequence of length k appears in bin i, by l_i the length of the probe sequence of bin i, and assume uniform distribution. Then p_1 = P(l_1 ≥ k) = P(l_1 ≥ α (1 + (k/α − 1))) ≤ (a) e^( −α (k/α − 1)² / 2 ), where (a) follows from the Chernoff bound. To calculate the probability that some bin contains a probe sequence greater than k, we note that the l_i are negatively dependent. Let I(·) denote the indicator function. Then P(∃i : l_i ≥ k) = P( ∑ i = 1 .. m I(l_i ≥ k) ≥ 1 ) = P( ∑ i = 1 .. m I(l_i ≥ k) ≥ m p_1 (1 + (1/(m p_1) − 1)) ) ≤ (a) e^( −m p_1 (1/(m p_1) − 1)² / 2 ), where (a) follows from the fact that the Chernoff bound can be applied to negatively-dependent variables. Substituting the first probability into the second, and equating with 1/m, we obtain k ≈ √( 2 α ln(2 m ln(m)) ).
Implementation This sub-subsection describes the implementation of the above in this library. It first describes resize policies and their decomposition into trigger and size policies, then describes pre-defined classes, and finally discusses controlled access to the policies' internals.
Decomposition Each hash-based container is parametrized by a Resize_Policy parameter; the container derives publicly from Resize_Policy. For example: cc_hash_table<typename Key, typename Mapped, ... typename Resize_Policy ...> : public Resize_Policy As a container object is modified, it continuously notifies its Resize_Policy base of internal changes (e.g., collisions encountered and elements being inserted). It queries its Resize_Policy base whether it needs to be resized, and if so, to what size. The graphic below shows a (possible) sequence diagram of an insert operation. The user inserts an element; the hash table notifies its resize policy that a search has started (point A); in this case, a single collision is encountered - the table notifies its resize policy of this (point B); the container finally notifies its resize policy that the search has ended (point C); it then queries its resize policy whether a resize is needed, and if so, what is the new size (points D to G); following the resize, it notifies the policy that a resize has completed (point H); finally, the element is inserted, and the policy notified (point I).
[Figure: Insert resize sequence diagram]
In practice, a resize policy can usually be decomposed orthogonally into a size policy and a trigger policy. Consequently, the library contains a single class for instantiating a resize policy: hash_standard_resize_policy is parametrized by Size_Policy and Trigger_Policy, derives publicly from both, and acts as a standard delegate to these policies. The two graphics immediately below show sequence diagrams illustrating the interaction between the standard resize policy and its trigger and size policies, respectively.
[Figure: Standard resize policy trigger sequence diagram]
[Figure: Standard resize policy size sequence diagram]
Predefined Policies The library includes the following instantiations of size and trigger policies: hash_load_check_resize_trigger implements a load-check trigger policy. cc_hash_max_collision_check_resize_trigger implements a collision-check trigger policy. hash_exponential_size_policy implements an exponential-size policy (which should be used with mask range hashing). hash_prime_size_policy implements a size policy based on a sequence of primes (which should be used with mod range hashing). The graphic below gives an overall picture of the resize-related classes. basic_hash_table is parametrized by Resize_Policy, which it subclasses publicly. This class is currently instantiated only by hash_standard_resize_policy. hash_standard_resize_policy itself is parametrized by Trigger_Policy and Size_Policy. Currently, Trigger_Policy is instantiated by hash_load_check_resize_trigger or cc_hash_max_collision_check_resize_trigger; Size_Policy is instantiated by hash_exponential_size_policy or hash_prime_size_policy.
Controlling Access to Internals There are cases where (controlled) access to resize policies' internals is beneficial. E.g., it is sometimes useful to query a hash-table for the table's actual size (as opposed to its size(), the number of values it currently holds); it is sometimes useful to set a table's initial size, externally resize it, or change load factors. Clearly, supporting such methods both decreases the encapsulation of hash-based containers, and increases the diversity between different associative containers' interfaces. Conversely, omitting such methods can decrease containers' flexibility. In order to avoid, to the extent possible, the above conflict, the hash-based containers themselves do not address any of these questions; this is deferred to the resize policies, which are easier to change or replace. Thus, for example, neither cc_hash_table nor gp_hash_table contains methods for querying the actual size of the table; this is deferred to hash_standard_resize_policy. Furthermore, the policies themselves are parametrized by template arguments that determine the methods they support: hash_standard_resize_policy is parametrized by External_Size_Access, which determines whether it supports methods for querying the actual size of the table or resizing it. hash_load_check_resize_trigger is parametrized by External_Load_Access, which determines whether it supports methods for querying or modifying the loads. cc_hash_max_collision_check_resize_trigger is parametrized by External_Load_Access, which determines whether it supports methods for querying the load. Some operations, for example, resizing a container at run time, or changing the load factors of a load-check trigger policy, require the container itself to resize. As mentioned above, the hash-based containers themselves do not contain these types of methods, only their resize policies.
Consequently, there must be some mechanism for a resize policy to manipulate the hash-based container. As the hash-based container is a subclass of the resize policy, this is done through virtual methods. Each hash-based container has a private virtual method: virtual void do_resize (size_type new_size); which resizes the container. Implementations of Resize_Policy can export public methods for resizing the container externally; these methods internally call do_resize to resize the table.
Policy Interactions Hash-tables are unfortunately especially sensitive to the choice of policies. One of the more complicated aspects of this is that poor combinations of individually good policies can form a poor container. Following are some considerations.
probe/size/trigger Some combinations do not work well for probing containers. For example, combining a quadratic probe policy with an exponential size policy can yield a poor container: when an element is inserted, a trigger policy might decide that there is no need to resize, as the table still contains unused entries; the probe sequence, however, might never reach any of the unused entries. Unfortunately, this library cannot detect such problems at compilation (they are halting reducible). It therefore defines an exception class insert_error to throw an exception in this case.
hash/trigger Some trigger policies are especially susceptible to poor hash functions. Suppose, as an extreme case, that the hash function transforms each key to the same hash value. After some inserts, a collision detecting policy will always indicate that the container needs to grow. The library, therefore, by design, limits each operation to one resize. For each insert, for example, it queries only once whether a resize is needed.
equivalence functors/storing hash values/hash cc_hash_table and gp_hash_table are parametrized by an equivalence functor and by a Store_Hash parameter. If the latter parameter is true, then the container stores with each entry its hash value, and uses this value in the case of collisions to determine whether the (possibly expensive) equivalence functor needs to be applied. This can lower the cost of collisions for some types, but increase it for other types. If a ranged-hash function or ranged-probe function is directly supplied, however, then it makes no sense to store the hash value with each entry. This library's containers will fail at compilation, by design, if this is attempted.
size/load-check trigger Assume a size policy issues an increasing sequence of sizes a, a q, a q², a q³, ... For example, an exponential size policy might issue the sequence of sizes 8, 16, 32, 64, ... If a load-check trigger policy is used, with loads α_min and α_max, respectively, then it is a good idea to have: α_max ~ 1 / q, and α_min < 1 / (2 q). This will ensure that the amortized hash cost of each modifying operation is at most approximately 3. α_min ~ α_max is, in any case, a bad choice, and α_min > α_max is horrendous.
tree
Interface The tree-based container has the following declaration: template< typename Key, typename Mapped, typename Cmp_Fn = std::less<Key>, typename Tag = rb_tree_tag, template< typename Const_Node_Iterator, typename Node_Iterator, typename Cmp_Fn_, typename Allocator_> class Node_Update = null_node_update, typename Allocator = std::allocator<char> > class tree; The parameters have the following meaning: Key is the key type. Mapped is the mapped-policy. Cmp_Fn is a key comparison functor. Tag specifies which underlying data structure to use. Node_Update is a policy for updating node invariants. Allocator is an allocator type. The Tag parameter specifies which underlying data structure to use. Instantiating it by rb_tree_tag, splay_tree_tag, or ov_tree_tag specifies an underlying red-black tree, splay tree, or ordered-vector tree, respectively; any other tag is illegal. Note that containers based on the former two contain more types and methods than the latter (e.g., reverse_iterator and rbegin), and provide different exception and invalidation guarantees.
Details
Node Invariants Consider the two trees in the graphic below, labels A and B. The first is a tree of floats; the second is a tree of pairs, each signifying a geometric line interval. Each element in a tree is referred to as a node of the tree. Of course, each of these trees can support the usual queries: the first can easily search for 0.4; the second can easily search for std::make_pair(10, 41). Each of these trees can also efficiently support other queries. The first can efficiently determine that the 2nd key in the tree is 0.3; the second can efficiently determine whether any of its intervals overlaps std::make_pair(29, 42) (useful in geometric applications or distributed file systems with leases, for example). It should be noted that an std::set can only solve these types of problems with linear complexity. In order to do so, each tree stores some metadata in each node, and maintains node invariants. The first stores in each node the size of the sub-tree rooted at the node; the second stores at each node the maximal endpoint of the intervals in the sub-tree rooted at the node.
[Figure: Tree node invariants]
Supporting such trees is difficult for a number of reasons: There must be a way to specify what a node's metadata should be (if any). Various operations can invalidate node invariants. The graphic below shows how a right rotation, performed on A, results in B, with nodes x and y having corrupted invariants (the grayed nodes in C). The graphic shows how an insert, performed on D, results in E, with nodes x and y having corrupted invariants (the grayed nodes in F). It is not feasible to know outside the tree the effect of an operation on the nodes of the tree. The search paths of standard associative containers are defined by comparisons between keys, and not through metadata. It is not feasible to know in advance which methods trees can support. Besides the usual find method, the first tree can support a find_by_order method, while the second can support an overlaps method.
[Figure: Tree node invalidation]
These problems are solved by a combination of two means: node iterators, and template-template node updater parameters.
Node Iterators Each tree-based container defines two additional iterator types, const_node_iterator and node_iterator. These iterators allow descending from a node to one of its children. Node iterators allow search paths different from those determined by the comparison functor. The tree supports the methods: const_node_iterator node_begin() const; node_iterator node_begin(); const_node_iterator node_end() const; node_iterator node_end(); The first pair returns node iterators corresponding to the root node of the tree; the latter pair returns node iterators corresponding to a just-after-leaf node.
Node Updator The tree-based containers are parametrized by a Node_Update template-template parameter. A tree-based container instantiates Node_Update to some node_update class, and publicly subclasses node_update. The graphic below shows this scheme, as well as some predefined policies (which are explained below).
[Figure: A tree and its update policy]
node_update (an instantiation of Node_Update) must define metadata_type as the type of metadata it requires. For order statistics, e.g., metadata_type might be size_t. The tree defines within each node a metadata_type object. node_update must also define the following method for restoring node invariants: void operator()(node_iterator nd_it, const_node_iterator end_nd_it) In this method, nd_it is a node_iterator corresponding to a node of which A) all descendants have valid invariants, and B) its own invariants might be violated; end_nd_it is a const_node_iterator corresponding to a just-after-leaf node. This method should correct the node invariants of the node pointed to by nd_it. For example, say node x in the graphic below, label A, has an invalid invariant, but its children, y and z, have valid invariants. After the invocation, all three nodes should have valid invariants, as in label B.
[Figure: Restoring node invariants]
When a tree operation might invalidate some node invariant, it invokes this method in its node_update base to restore the invariant. For example, the graphic below shows an insert operation (point A); the tree performs some operations, and calls the update functor three times (points B, C, and D). (It is well known that any insert, erase, split, or join can restore all node invariants with a small number of node-invariant updates.)
[Figure: Insert update sequence]
To complete the description of the scheme, three questions need to be answered: How can a tree which supports order statistics define a method such as find_by_order? How can the node updater base access methods of the tree? How can the following cyclic dependency be resolved? node_update is a base class of the tree, yet it uses node iterators defined in the tree (its child). The first two questions are answered by the fact that node_update (an instantiation of Node_Update) is a public base class of the tree. Consequently: Any public methods of node_update are automatically methods of the tree. Thus an order-statistics node updater, tree_order_statistics_node_update, defines the find_by_order method; any tree instantiated by this policy consequently supports this method as well. In C++, if a base class declares a method as virtual, it is virtual in its subclasses. If node_update needs to access one of the tree's methods, say the member function end, it simply declares that method as pure virtual. The cyclic dependency is solved through template-template parameters. Node_Update is parametrized by the tree's node iterators, its comparison functor, and its allocator type. Thus, instantiations of Node_Update have all the information required. This library assumes that constructing a metadata object and modifying it are exception free. Suppose that during some method, say insert, a metadata-related operation (e.g., changing the value of a metadata object) were to throw an exception: rolling back the method would be unusually complex. Previously, a distinction was made between redundant policies and null policies. Node invariants show a case where null policies are required. Assume a regular tree is required, one which need not support order statistics or interval overlap queries. Seemingly, in this case a redundant policy, a policy which doesn't affect nodes' contents, would suffice.
This would lead to the following drawbacks: Each node would carry a useless metadata object, wasting space. The tree cannot know whether its Node_Update policy actually modifies a node's metadata (this is halting reducible). In the graphic below, assume the shaded node is inserted. The tree would have to traverse the useless path shown to the root, applying redundant updates all the way.
[Figure: Useless update path]
A null policy class, null_node_update, solves both these problems. The tree detects that node invariants are irrelevant, and defines all operations accordingly.
Split and Join Tree-based containers support split and join methods. It is possible to split a tree so that it passes all nodes with keys larger than a given key to a different tree. These methods have the following advantages over the alternative of externally inserting to the destination tree and erasing from the source tree: They are efficient: red-black trees are split and joined in poly-logarithmic complexity, and ordered-vector trees in linear complexity, whereas the alternatives have super-linear complexity. Aside from orders of growth, these operations perform few allocations and deallocations. For red-black trees, no allocations are performed, and the methods are exception-free.
Trie
Interface The trie-based container has the following declaration: template<typename Key, typename Mapped, typename E_Access_Traits = /* default element-access traits for Key, explained below */, typename Tag = pat_trie_tag, template<typename Const_Node_Iterator, typename Node_Iterator, typename E_Access_Traits_, typename Allocator_> class Node_Update = null_node_update, typename Allocator = std::allocator<char> > class trie; The parameters have the following meaning: Key is the key type. Mapped is the mapped-policy. E_Access_Traits describes how to access a key's elements, and is described below. Tag specifies which underlying data structure to use, and is described shortly. Node_Update is a policy for updating node invariants; it is described below. Allocator is an allocator type. The Tag parameter specifies which underlying data structure to use. Instantiating it by pat_trie_tag specifies an underlying PATRICIA trie (explained shortly); any other tag is currently illegal. Following is a description of a PATRICIA trie. A PATRICIA trie is similar to a tree, but with the following differences: It explicitly views keys as a sequence of elements. E.g., a trie can view a string as a sequence of characters; a trie can view a number as a sequence of bits. It is not (necessarily) binary. Each node has fan-out n + 1, where n is the number of distinct elements. It stores values only at leaf nodes. Internal nodes have the properties that A) each has at least two children, and B) each shares the same prefix with any of its descendants. A PATRICIA trie has some useful properties: It can be configured to use large node fan-out, giving it very efficient find performance (albeit at the cost of insertion complexity and size). It works well for common-prefix keys. It can efficiently support queries such as which keys match a certain prefix. This is sometimes useful in file systems and routers, and for "type-ahead", aka predictive text matching, on mobile devices.
Details
Element Access Traits A trie inherently views its keys as sequences of elements. For example, a trie can view a string as a sequence of characters. A trie needs to map each of n elements to a number in {0, ..., n - 1}. For example, a trie can map a character c to static_cast<size_t>(c). Seemingly, then, a trie can assume that its keys support (const) iterators, and that the value_type of this iterator can be cast to a size_t. There are several reasons, though, to decouple the mechanism by which the trie accesses its keys' elements from the trie itself: In some cases, the numerical value of an element is inappropriate. Consider a trie storing DNA strings. It is logical to use a trie with a fan-out of 5 = 1 + |{'A', 'C', 'G', 'T'}|. This requires mapping 'T' to 3, though. In some cases the keys' iterators are different from what is needed. For example, a trie can be used to search for common suffixes, by using strings' reverse_iterator. As another example, a trie mapping UNICODE strings would have a huge fan-out if each node branched on a UNICODE character; instead, one can define an iterator iterating over 8-bit (or smaller) groups. trie is, consequently, parametrized by E_Access_Traits, traits which instruct how to access sequences' elements. string_trie_e_access_traits is a traits class for strings. Each such traits class defines some types, like: typename E_Access_Traits::const_iterator is a const iterator iterating over a key's elements. The traits class must also define methods for obtaining an iterator to the first and last element of a key. The graphic below shows a PATRICIA trie resulting from inserting the words: "I wish that I could ever see a poem lovely as a trie" (which, unfortunately, does not rhyme). The leaf nodes contain values; each internal node contains two typename E_Access_Traits::const_iterator objects, indicating the maximal common prefix of all keys in the sub-tree. For example, the shaded internal node roots a sub-tree with leaves "a" and "as". The maximal common prefix is "a". The internal node contains, consequently, two const iterators, one pointing to 'a', and the other to 's'.
[Figure: A PATRICIA trie]
Node Invariants Trie-based containers support node invariants, as do tree-based containers. There are two minor differences, though, which, unfortunately, thwart their sharing the same node-updating policies: A trie's Node_Update template-template parameter is parametrized by E_Access_Traits, while a tree's Node_Update template-template parameter is parametrized by Cmp_Fn. Tree-based containers store values in all nodes, while trie-based containers (at least in this implementation) store values only in leafs. The graphic below shows the scheme, as well as some predefined policies (which are explained below).
A trie and its update policy A trie and its update policy
This library offers the following pre-defined trie node updating policies: trie_order_statistics_node_update supports order statistics. trie_prefix_search_node_update supports searching for ranges that match a given prefix. null_node_update is the null node updater.
Split and Join Trie-based containers support split and join methods; the rationale is the same as that for tree-based containers supporting these methods.
List
Interface The list-based container has the following declaration: template<typename Key, typename Mapped, typename Eq_Fn = std::equal_to<Key>, typename Update_Policy = move_to_front_lu_policy<>, typename Allocator = std::allocator<char> > class list_update; The parameters have the following meaning: Key is the key type. Mapped is the mapped-policy. Eq_Fn is a key equivalence functor. Update_Policy is a policy for updating positions in the list based on access patterns. It is described in the following subsection. Allocator is an allocator type. A list-based associative container is a container that stores elements in a linked-list. It does not order the elements by any particular order related to the keys. List-based containers are primarily useful for creating "multimaps". In fact, list-based containers are designed in this library expressly for this purpose. List-based containers might also be useful in some rare cases, where a key is encapsulated to the extent that only key equivalence can be tested. Hash-based containers need to know how to transform a key into a size type, and tree-based containers need to know whether some key is larger than another. List-based associative containers, conversely, only need to know whether two keys are equivalent. Since a list-based associative container does not order elements by keys, is it possible to order the list in some useful manner? Remarkably, many on-line competitive algorithms exist for reordering lists to reflect access prediction.
Details
Underlying Data Structure The graphic below shows a simple list of integer keys. If we search for the integer 6, we are paying an overhead: the link with key 6 is only the fifth link; if it were the first link, it could be accessed faster.
A simple list A simple list
List-update algorithms reorder lists as elements are accessed. They try to determine, by the access history, which keys to move to the front of the list. Some of these algorithms require adding some metadata alongside each entry. For example, in the graphic below label A shows the counter algorithm. Each node contains both a key and a count metadata (shown in bold). When an element is accessed (e.g. 6) its count is incremented, as shown in label B. If the count reaches some predetermined value, say 10, as shown in label C, the count is set to 0 and the node is moved to the front of the list, as in label D.
The counter algorithm The counter algorithm
Policies This library allows instantiating lists with policies implementing any algorithm moving nodes to the front of the list (policies implementing algorithms interchanging nodes are unsupported). Associative containers based on lists are parametrized by an Update_Policy parameter. This parameter defines the type of metadata each node contains, how to create the metadata, and how to decide, using this metadata, whether to move a node to the front of the list. A list-based associative container object derives (publicly) from its update policy. An instantiation of Update_Policy must define internally update_metadata as the metadata it requires. Internally, each node of the list contains, besides the usual key and data, an instance of typename Update_Policy::update_metadata. An instantiation of Update_Policy must define internally two operators: update_metadata operator()(); bool operator()(update_metadata &); The first is called by the container object, when creating a new node, to create the node's metadata. The second is called by the container object, when a node is accessed (i.e., when a find operation's key is equivalent to the key of the node), to determine whether to move the node to the front of the list. The library contains two predefined implementations of list-update policies. The first is lu_counter_policy, which implements the counter algorithm described above. The second is lu_move_to_front_policy, which unconditionally moves an accessed element to the front of the list. The latter type is very useful in this library, since there is no need to associate metadata with each element.
Use in Multimaps In this library, there are no equivalents for the standard's multimaps and multisets; instead one uses an associative container mapping primary keys to secondary keys. List-based containers are especially useful as associative containers for secondary keys. In fact, they are implemented here expressly for this purpose. To begin with, these containers use very little per-entry structure memory overhead, since they can be implemented as singly-linked lists. (Arrays use even lower per-entry memory overhead, but they are less flexible in moving around entries, and have weaker invalidation guarantees.) More importantly, though, list-based containers use very little per-container memory overhead. The memory overhead of an empty list-based container is practically that of a pointer. This is important when they are used as secondary associative containers in situations where the average ratio of secondary keys to primary keys is low (or even 1). In order to reduce the per-container memory overhead as much as possible, they are implemented as closely as possible to singly-linked lists. List-based containers do not store internally the number of values that they hold. This means that their size method has linear complexity (just like std::list). Note that finding the number of equivalent-key values in a standard multimap also has linear complexity (because it must be computed via std::distance over the range returned by the multimap's equal_range method), but usually with higher constants. Most associative-container objects each hold a policy object (e.g., a hash-based container object holds a hash functor). List-based containers, conversely, only have class-wide policy objects.
Priority Queue
Interface The priority queue container has the following declaration: template<typename Value_Type, typename Cmp_Fn = std::less<Value_Type>, typename Tag = pairing_heap_tag, typename Allocator = std::allocator<char > > class priority_queue; The parameters have the following meaning: Value_Type is the value type. Cmp_Fn is a value comparison functor. Tag specifies which underlying data structure to use. Allocator is an allocator type. The Tag parameter specifies which underlying data structure to use. Instantiating it by pairing_heap_tag, binary_heap_tag, binomial_heap_tag, rc_binomial_heap_tag, or thin_heap_tag specifies, respectively, an underlying pairing heap, binary heap, binomial heap, binomial heap with a redundant binary counter, or thin heap. As mentioned in the tutorial, __gnu_pbds::priority_queue shares most of the same interface with std::priority_queue. E.g. if q is a priority queue of type Q, then q.top() will return the "largest" value in the container (according to typename Q::cmp_fn). __gnu_pbds::priority_queue has a larger (and very slightly different) interface than std::priority_queue, however, since typically push and pop are deemed insufficient for manipulating priority queues. Different settings require different priority-queue implementations, which are described later; the traits section discusses ways to differentiate between the traits of the different implementations.
Details
Iterators There are many different underlying-data structures for implementing priority queues. Unfortunately, most such structures are oriented towards making push and top efficient, and consequently don't allow efficient access of other elements: for instance, they cannot support an efficient find method. In the use case where it is important to both access and "do something with" an arbitrary value, one would be out of luck. For example, many graph algorithms require modifying a value (typically increasing it in the sense of the priority queue's comparison functor). In order to access and manipulate an arbitrary value in a priority queue, one needs to reference the internals of the priority queue from some form of an associative container - this is unavoidable. Of course, in order to maintain the encapsulation of the priority queue, this needs to be done in a way that minimizes exposure to implementation internals. In this library the priority queue's insert method returns an iterator, which if valid can be used for subsequent modify and erase operations. This both preserves the priority queue's encapsulation, and allows accessing arbitrary values (since the returned iterators from the push operation can be stored in some form of associative container). Priority queues' iterators present a problem regarding their invalidation guarantees. One assumes that calling operator++ on an iterator will associate it with the "next" value. Priority-queues are self-organizing: each operation changes what the "next" value means. Consequently, it does not make sense that push will return an iterator that can be incremented - this can have no possible use. Also, as in the case of hash-based containers, it is awkward to define if a subsequent push operation invalidates a prior returned iterator: it invalidates it in the sense that its "next" value is not related to what it previously considered to be its "next" value. 
However, it might not invalidate it, in the sense that it can be de-referenced and used for modify and erase operations. Similarly to the case of the other unordered associative containers, this library uses a distinction between point-type and range-type iterators. A priority queue's iterator can always be converted to a point_iterator, and a const_iterator can always be converted to a point_const_iterator. The following snippet demonstrates manipulating an arbitrary value: // A priority queue of integers. priority_queue<int > p; // Insert some values into the priority queue. priority_queue<int >::point_iterator it = p.push(0); p.push(1); p.push(2); // Now modify a value. p.modify(it, 3); assert(p.top() == 3); It should be noted that an alternative design could embed an associative container in a priority queue. Could, but most probably should not. To begin with, it should be noted that one could always encapsulate a priority queue and an associative container mapping values to priority queue iterators with no performance loss. One cannot, however, "un-encapsulate" a priority queue embedding an associative container, which might lead to performance loss. Assume that one needs to associate each value with some data unrelated to priority queues. Then using this library's design, one could use an associative container mapping each value to a pair consisting of this data and a priority queue's iterator. Using the embedding method would require two associative containers. Similar problems might arise in cases where a value can reside simultaneously in many priority queues.
Underlying Data Structure There are three main implementations of priority queues: the first employs a binary heap, typically one which uses a sequence; the second uses a tree (or forest of trees), which is typically less structured than an associative container's tree; the third simply uses an associative container. These are shown in the graphic below, in labels A1 and A2, label B, and label C.
Underlying Priority-Queue Data-Structures. Underlying Priority-Queue Data-Structures.
Roughly speaking, any value that is both pushed and popped from a priority queue must incur a logarithmic expense (in the amortized sense); any priority queue implementation that avoided this would violate known bounds on comparison-based sorting. Most implementations do not differ in the asymptotic amortized complexity of push and pop operations, but they differ in the constants involved, in the complexity of other operations (e.g., modify), and in the worst-case complexity of single operations. In general, the more "structured" an implementation (i.e., the more internal invariants it possesses), the higher its amortized complexity of push and pop operations. This library implements different algorithms using a single class: priority_queue. Instantiating the Tag template parameter "selects" the implementation: Instantiating Tag = binary_heap_tag creates a binary heap of the form represented in the graphic with labels A1 or A2. The former is internally selected by priority_queue if Value_Type is instantiated by a primitive type (e.g., an int); the latter is internally selected for all other types (e.g., std::string). This implementation is relatively unstructured, and so has good push and pop performance; it is the "best-in-kind" for primitive types, e.g., ints. Conversely, it has high worst-case performance, and can support only linear-time modify and erase operations. Instantiating Tag = pairing_heap_tag creates a pairing heap of the form represented by label B in the graphic above. This implementation too is relatively unstructured, and so has good push and pop performance; it is the "best-in-kind" for non-primitive types, e.g., std::string. It also has very good worst-case push and join performance (O(1)), but has high worst-case pop complexity. Instantiating Tag = binomial_heap_tag creates a binomial heap of the form represented by label B in the graphic above.
This implementation is more structured than a pairing heap, and so has worse push and pop performance. Conversely, it has sub-linear worst-case bounds for pop, and so it might be preferred in cases where responsiveness is important. Instantiating Tag = rc_binomial_heap_tag creates a binomial heap of the form represented in label B above, accompanied by a redundant counter which governs the trees. This implementation is therefore more structured than a binomial heap, and so has worse push and pop performance. Conversely, it guarantees O(1) push complexity, and so it might be preferred in cases where the responsiveness of a binomial heap is insufficient. Instantiating Tag = thin_heap_tag creates a thin heap of the form represented by label B in the graphic above. This implementation too is more structured than a pairing heap, and so has worse push and pop performance. Conversely, it has better worst-case bounds than a Fibonacci heap and identical amortized complexities, and so might be more appropriate for some graph algorithms. Of course, one can use any order-preserving associative container as a priority queue, as in the graphic above label C, possibly by creating an adapter class over the associative container (much as std::priority_queue can adapt std::vector). This has the advantage that no cross-referencing is necessary at all; the priority queue itself is an associative container. Most associative containers, however, are too structured to compete with priority queues in terms of push and pop performance.
Traits It would be nice if all priority queues could share exactly the same behavior regardless of implementation. Sadly, this is not possible. One instance involves join operations: joining two binary heaps might throw an exception (though it will not corrupt either of the heaps on which it operates), while joining two pairing heaps is exception-free. Tags and traits are very useful for manipulating generic types. __gnu_pbds::priority_queue publicly defines container_category as one of the tags. Given any container Cntnr, the tag of the underlying data structure can be found via typename Cntnr::container_category; this is one of the possible tags shown in the graphic below.
Priority-Queue Data-Structure Tags. Priority-Queue Data-Structure Tags.
Additionally, a traits mechanism can be used to query a container type for its attributes. Given any container Cntnr, __gnu_pbds::container_traits<Cntnr> is a traits class identifying the properties of the container. To find whether a container might throw if two of its objects are joined, one can use container_traits<Cntnr>::split_join_can_throw. Different priority-queue implementations have different invalidation guarantees. This is especially important, since there is no way to access an arbitrary value of a priority queue except via iterators. Similarly to associative containers, one can use container_traits<Cntnr>::invalidation_guarantee to get the invalidation-guarantee type of a priority queue. It is easy to understand from the graphic above what container_traits<Cntnr>::invalidation_guarantee will be for different implementations. All implementations of the type represented by label B have point_invalidation_guarantee: the container can freely reorganize its nodes internally - range-type iterators are invalidated, but point-type iterators remain valid. Implementations of the type represented by labels A1 and A2 have basic_invalidation_guarantee: the container can freely reallocate the array internally - both point-type and range-type iterators might be invalidated. This has major implications, and constitutes a good reason to avoid using binary heaps. A binary heap can perform modify or erase efficiently given a valid point-type iterator. However, in order to supply it with a valid point-type iterator, one needs to iterate (linearly) over all values, then supply the relevant iterator (recall that a range-type iterator can always be converted to a point-type iterator). This means that if the number of modify or erase operations is non-negligible (say super-logarithmic in the total sequence of operations), binary heaps will perform badly.
Acknowledgments Written by Ami Tavory and Vladimir Dreizin (IBM Haifa Research Laboratories), and Benjamin Kosnik (Red Hat). This library was partially written at IBM's Haifa Research Labs. It is based heavily on policy-based design and uses many useful techniques from Modern C++ Design: Generic Programming and Design Patterns Applied by Andrei Alexandrescu. Two ideas are borrowed from the SGI-STL implementation: The prime-based resize policies use a list of primes taken from the SGI-STL implementation. The red-black trees contain both a root node and a header node (containing metadata), connected in a way that forward and reverse iteration can be performed efficiently. Some test utilities borrow ideas from boost::timer. We would like to thank Scott Meyers for useful comments (without attributing to him any flaws in the design or implementation of the library). We would like to thank Matt Austern for the suggestion to include tries.