* Tree initialization is revised to establish miri safety:
  * miri tests on prior versions failed due to uninitialized data in the first offset elements of the vector backing the tree. Note that the implementation guarantees that these elements are never accessed.
  * One potential solution to avoid uninitialized elements was to require `Default` or `PseudoDefault` for tree nodes. Most probably, this would have worked in most practical use cases. However, one exception is references: nodes that are references are a practical case, and references implement neither `Default` nor `PseudoDefault`.
  * Since we already require `Clone`, another solution is to fill the offset elements with the first node ever pushed to the queue. In other words, tree initialization is delayed until the first element is pushed. The concern here was the potential impact of the additional "is the backing vector empty" check on push methods. However, benchmarks show that this check has no measurable impact on performance.
  * Since the latter solution adds no trait requirement on keys or values and does not degrade performance, it is the one implemented in this PR.
  * With the updated, delayed tree initialization without `assume_init`, all miri tests pass.
* Minor fix in no-std configurations.
* Benchmark runs are repeated.
* Documentation is revised.
README.md

See [DecreaseKey](https://github.com/orxfun/orx-priority-queue/blob/main/docs/De
## B. d-ary Heap Implementations

All heap types have a constant generic parameter `D` which defines the maximum number of children of a node in the tree. d-ary implementations are generalizations of the binary heap; i.e., the binary heap is the special case where `D=2`. A parametrized d is advantageous; for instance, in the benchmarks reported here, `D=4` outperforms `D=2`.
* With a large d: the number of per-level comparisons increases while the tree becomes shallower.
* With a small d: each level requires fewer comparisons while a tree with the same number of nodes is deeper.
Further, three categories of d-ary heap implementations are introduced.
### 1. DaryHeap (PriorityQueue)
This is the basic d-ary heap implementing `PriorityQueue`. It is the default choice unless priority updates or decrease-key operations are required.
### 2. DaryHeapOfIndices (PriorityQueueDecKey)

This is a d-ary heap paired up with a positions array and implements `PriorityQueueDecKey`.
* It requires the nodes to implement the `HasIndex` trait, which is nothing but `fn index(&self) -> usize`. Note that `usize`, `u64`, etc., already implement `HasIndex`.
* Further, it requires knowing the maximum index that is expected to enter the queue. In other words, candidates are expected to come from a closed set.
Once these conditions are satisfied, it **performs significantly faster** than the alternative decrease key queues.

Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidate set is the nodes of the graph, or indices in `0..num_nodes`. Similarly, if the heap is to be used for sorting elements of a list, indices simply come from `0..list_len`.
This is the default decrease-key queue provided that the requirements are satisfied.
The table above summarizes the benchmark results of basic operations on basic queues, and of queues allowing decrease-key operations.
* In the first benchmark, we repeatedly call `push` and `pop` operations on a queue while maintaining an average length of 100000:
  * We observe that `BinaryHeap` (`DaryHeap<_, _, 2>`) performs almost the same as the standard binary heap.
  * Experiments with different values of d show that `QuaternaryHeap` (D=4) outperforms both binary heaps.
  * Further increasing D to 8 does not improve performance.
  * Finally, we repeat the experiments with `BinaryHeap` and `QuaternaryHeap` using the specialized [`push_then_pop`](https://docs.rs/orx-priority-queue/latest/orx_priority_queue/trait.PriorityQueue.html#tymethod.push_then_pop) operation. Note that this operation further doubles the performance and, hence, should be used whenever it fits the use case.
* In the second benchmark, we add [`decrease_key_or_push`](https://docs.rs/orx-priority-queue/latest/orx_priority_queue/trait.PriorityQueueDecKey.html#method.decrease_key_or_push) calls to the operations. The standard binary heap is excluded since it cannot implement `PriorityQueueDecKey`.
  * We observe that `DaryHeapOfIndices` significantly outperforms the other decrease key queues.
  * Among `BinaryHeapOfIndices` and `QuaternaryHeapOfIndices`, the latter with D=4 again performs better.
See the [Benchmarks](https://github.com/orxfun/orx-priority-queue/blob/main/docs/Benchmarks.md) section for the experiments and observations.
## C. Examples

### C.1. Basic Usage
The example below demonstrates basic usage of a simple `PriorityQueue`. You may see the entire set of functionalities [here](https://docs.rs/orx-priority-queue/latest/orx_priority_queue/trait.PriorityQueue.html).
As mentioned, `PriorityQueueDecKey` extends the capabilities of a `PriorityQueue`. You may see the additional functionalities [here](https://docs.rs/orx-priority-queue/latest/orx_priority_queue/trait.PriorityQueueDecKey.html).
You may see below two implementations of Dijkstra's shortest path algorithm: one using a `PriorityQueue` and the other a `PriorityQueueDecKey`. Please note the following:
* Priority queue traits allow us to be generic over queues. Therefore, we are able to implement the algorithm once, and it works for any queue implementation.
* The second implementation, with a decrease key queue, pushes some of the bookkeeping to the queue and arguably leads to a cleaner algorithm implementation.