README.md (14 additions & 23 deletions)
@@ -20,39 +20,31 @@ Three categories of d-ary heap implementations are provided.
 
 All the heap types have a constant generic parameter `D` which defines the maximum number of children of a node in the tree. Note that the d-ary heap is a generalization of the binary heap for which d=2:
 * With a large d: the number of per-level comparisons increases while the tree depth becomes smaller.
-* With a small d: each level required fewer comparisons while the tree gets deeper.
+* With a small d: each level requires fewer comparisons while the tree gets deeper.
 
 There is no dominating variant for all use cases. Binary heap is often the preferred choice due to its simplicity of implementation. However, the d-ary implementations in this crate, taking advantage of **const generics**, provide a generalization, making it easy to switch between the variants. The motivation is to allow for tuning the heap to the algorithms and relevant input sets for performance-critical methods.
 
-### `DaryHeap<N, K, const D: usize>`
+### `DaryHeap`
 
 This is the basic d-ary heap implementing `PriorityQueue<N, K>`. It is the default choice unless priority updates or decrease-key operations are required.
 
-### `DaryHeapOfIndices<N, K, const D>`
+### `DaryHeapOfIndices`
 
 This is a d-ary heap paired up with a positions array and implements `PriorityQueueDecKey<N, K>`.
 
 * It requires the nodes to implement the `HasIndex` trait, which is nothing but `fn index(&self) -> usize`. Note that `usize`, `u64`, etc., already implement `HasIndex`.
-* Further, it requires the maximum index that is expected to enter the queue (candidates coming from a closed set).
+* Further, it requires knowing the maximum index that is expected to enter the queue (candidates coming from a closed set).
 
-Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues. Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph, or indices in `0..numNodes`.
+Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease-key queues. Although the closed-set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidate set is the nodes of the graph, or indices in `0..num_nodes`.
 
 This is the default decrease-key queue provided that the requirements are satisfied.
 
-### `DaryHeapWithMap<N, K, const D>`
+### `DaryHeapWithMap`
 
 This is a d-ary heap paired up with a positions map (`HashMap`, or `BTreeMap` when no-std) and implements `PriorityQueueDecKey<N, K>`.
 
 This is the most general decrease-key queue; it provides open-set flexibility and fits almost all cases.
 
-The following two types additionally implement `PriorityQueueDecKey<N, K>` which serve different purposes:
-
-* **`DaryHeapOfIndices<N, K, const D>`** is a d-ary heap paired up with a positions array. It requires the nodes to implement `HasIndex` trait which is nothing but `fn index(&self) -> usize`. Further, it requires the maximum index that is expected to enter the queue (candidates coming from a closed set). Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues.
-* Although the closed set requirement might sound strong, it is often satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph.
-* **`DaryHeapWithMap<N, K, const D: usize>`** is a d-ary heap paired up with a positions `HashMap` (`BTreeMap` with no-std). This provides the open-set flexibility and fits better to more general cases, rather than mathematical algorithms.
-
-All three variants of the d-ary heap implementations take complete benefit of const generics to speed up traversal on the heap when d is a power of two.
-
 ### Other Queues
 
 In addition, queue implementations are provided in this crate for the following external data structures:
@@ -65,9 +57,9 @@ This allows to use all the queue implementations interchangeably and measure per
 
 ### Performance & Benchmarks
 
-In scenarios in tested "src/benches", `DaryHeap` performs:
-* comparable to, slightly faster than, `std::collections::BinaryHeap` for simple queue operations; and
-* significantly faster than queues implementing PriorityQueueDecKey for decrease key operations.
+In scenarios tested in "src/benches":
+* `DaryHeap` performs slightly faster than `std::collections::BinaryHeap` for simple queue operations; and
+* `DaryHeapOfIndices` performs significantly faster than queues implementing `PriorityQueueDecKey` for scenarios requiring decrease-key operations.
 
 See the [Benchmarks](https://github.com/orxfun/orx-priority-queue/blob/main/docs/Benchmarks.md) section for the experiments and observations.
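To make the const generic arity switch described above concrete, here is a minimal sketch of constructing heaps with different `D` values. It assumes `DaryHeap` can be constructed via `Default` and that `PriorityQueue<N, K>` exposes `push(node, key)` and `pop() -> Option<(N, K)>` as described in this diff; constructor and method names may differ slightly from the crate's exact API.

```rust
use orx_priority_queue::{DaryHeap, PriorityQueue};

fn main() {
    // Binary (d = 2) and quaternary (d = 4) heaps: only the const generic argument changes.
    // Constructor (`default`) and methods (`push`, `pop`) are assumptions based on the trait description above.
    let mut binary: DaryHeap<usize, u32, 2> = DaryHeap::default();
    let mut quaternary: DaryHeap<usize, u32, 4> = DaryHeap::default();

    for (node, key) in [(0_usize, 42_u32), (1, 7), (2, 24)] {
        binary.push(node, key);
        quaternary.push(node, key);
    }

    // Both are min-heaps on the key; only the internal tree shape differs.
    assert_eq!(binary.pop(), Some((1, 7)));
    assert_eq!(quaternary.pop(), Some((1, 7)));
}
```

Because `D` is a compile-time constant, switching the arity is a type-level change only, which is what makes tuning the heap per algorithm and input family cheap.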
@@ -157,7 +149,7 @@
 You may see below two implementations, one using a `PriorityQueue` and the other a `PriorityQueueDecKey`. Please note the following:
 
-* `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore we are able to implement the shortest path algorithm once that works for any specific queue implementation. This allows to benchmark and tune specific queues for specific algorithms or input families.
+* `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore, we are able to implement the shortest path algorithm once, and it works with any queue implementation. This allows us to benchmark and tune specific queues for specific algorithms or input families.
 * The second implementation, with a decrease-key queue, pushes a great portion of the complexity, or bookkeeping, to the queue and leads to a cleaner algorithm implementation.
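To illustrate the first bullet, below is a minimal sketch of a shortest-distance routine written once against the `PriorityQueue` trait and then run with a concrete heap. The adjacency-list representation and the function name are illustrative assumptions, not the crate's own example, and the trait methods (`push`, `pop`) are taken from the descriptions in this diff.

```rust
use orx_priority_queue::{DaryHeap, PriorityQueue};

// A sketch of single-source shortest distances, generic over the queue type.
// Assumed: `PriorityQueue<usize, u32>` exposes `push(node, key)` and `pop() -> Option<(node, key)>`.
fn shortest_distances<Q: PriorityQueue<usize, u32>>(
    queue: &mut Q,
    adj: &[Vec<(usize, u32)>], // adj[node] = (head, weight) pairs; illustrative graph representation
    source: usize,
) -> Vec<u32> {
    let mut dist = vec![u32::MAX; adj.len()];
    dist[source] = 0;
    queue.push(source, 0);

    while let Some((node, cost)) = queue.pop() {
        if cost > dist[node] {
            continue; // stale entry; a decrease-key queue would avoid these
        }
        for &(head, weight) in &adj[node] {
            let next = cost + weight;
            if next < dist[head] {
                dist[head] = next;
                queue.push(head, next);
            }
        }
    }
    dist
}

fn main() {
    let adj: Vec<Vec<(usize, u32)>> = vec![vec![(1, 4), (2, 1)], vec![(3, 1)], vec![(1, 2), (3, 5)], vec![]];

    // The same routine runs unchanged with any arity; only the queue instance differs.
    let mut heap = DaryHeap::<usize, u32, 4>::default();
    assert_eq!(shortest_distances(&mut heap, &adj, 0), vec![0, 3, 1, 4]);
}
```

The decrease-key variant mentioned in the second bullet would instead apply a decrease-key style update on a `PriorityQueueDecKey` queue, moving that bookkeeping out of the algorithm.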
src/lib.rs (14 additions & 23 deletions)
@@ -20,39 +20,31 @@
 //!
 //! All the heap types have a constant generic parameter `D` which defines the maximum number of children of a node in the tree. Note that the d-ary heap is a generalization of the binary heap for which d=2:
 //! * With a large d: the number of per-level comparisons increases while the tree depth becomes smaller.
-//! * With a small d: each level required fewer comparisons while the tree gets deeper.
+//! * With a small d: each level requires fewer comparisons while the tree gets deeper.
 //!
 //! There is no dominating variant for all use cases. Binary heap is often the preferred choice due to its simplicity of implementation. However, the d-ary implementations in this crate, taking advantage of **const generics**, provide a generalization, making it easy to switch between the variants. The motivation is to allow for tuning the heap to the algorithms and relevant input sets for performance-critical methods.
 //!
-//! ### `DaryHeap<N, K, const D: usize>`
+//! ### `DaryHeap`
 //!
 //! This is the basic d-ary heap implementing `PriorityQueue<N, K>`. It is the default choice unless priority updates or decrease-key operations are required.
 //!
-//! ### `DaryHeapOfIndices<N, K, const D>`
+//! ### `DaryHeapOfIndices`
 //!
 //! This is a d-ary heap paired up with a positions array and implements `PriorityQueueDecKey<N, K>`.
 //!
 //! * It requires the nodes to implement the `HasIndex` trait, which is nothing but `fn index(&self) -> usize`. Note that `usize`, `u64`, etc., already implement `HasIndex`.
-//! * Further, it requires the maximum index that is expected to enter the queue (candidates coming from a closed set).
+//! * Further, it requires knowing the maximum index that is expected to enter the queue (candidates coming from a closed set).
 //!
-//! Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues. Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph, or indices in `0..numNodes`.
+//! Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease-key queues. Although the closed-set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidate set is the nodes of the graph, or indices in `0..num_nodes`.
 //!
 //! This is the default decrease-key queue provided that the requirements are satisfied.
 //!
-//! ### `DaryHeapWithMap<N, K, const D>`
+//! ### `DaryHeapWithMap`
 //!
 //! This is a d-ary heap paired up with a positions map (`HashMap`, or `BTreeMap` when no-std) and implements `PriorityQueueDecKey<N, K>`.
 //!
 //! This is the most general decrease-key queue; it provides open-set flexibility and fits almost all cases.
 //!
-//! The following two types additionally implement `PriorityQueueDecKey<N, K>` which serve different purposes:
-//!
-//! * **`DaryHeapOfIndices<N, K, const D>`** is a d-ary heap paired up with a positions array. It requires the nodes to implement `HasIndex` trait which is nothing but `fn index(&self) -> usize`. Further, it requires the maximum index that is expected to enter the queue (candidates coming from a closed set). Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues.
-//! * Although the closed set requirement might sound strong, it is often satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph.
-//! * **`DaryHeapWithMap<N, K, const D: usize>`** is a d-ary heap paired up with a positions `HashMap` (`BTreeMap` with no-std). This provides the open-set flexibility and fits better to more general cases, rather than mathematical algorithms.
-//!
-//! All three variants of the d-ary heap implementations take complete benefit of const generics to speed up traversal on the heap when d is a power of two.
-//!
 //! ### Other Queues
 //!
 //! In addition, queue implementations are provided in this crate for the following external data structures:
@@ -65,9 +57,9 @@
 //!
 //! ### Performance & Benchmarks
 //!
-//! In scenarios in tested "src/benches", `DaryHeap` performs:
-//! * comparable to, slightly faster than, `std::collections::BinaryHeap` for simple queue operations; and
-//! * significantly faster than queues implementing PriorityQueueDecKey for decrease key operations.
+//! In scenarios tested in "src/benches":
+//! * `DaryHeap` performs slightly faster than `std::collections::BinaryHeap` for simple queue operations; and
+//! * `DaryHeapOfIndices` performs significantly faster than queues implementing `PriorityQueueDecKey` for scenarios requiring decrease-key operations.
 //!
 //! See the [Benchmarks](https://github.com/orxfun/orx-priority-queue/blob/main/docs/Benchmarks.md) section for the experiments and observations.
 //!
@@ -157,7 +149,7 @@
 //!
 //! You may see below two implementations, one using a `PriorityQueue` and the other a `PriorityQueueDecKey`. Please note the following:
 //!
-//! * `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore we are able to implement the shortest path algorithm once that works for any specific queue implementation. This allows to benchmark and tune specific queues for specific algorithms or input families.
+//! * `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore, we are able to implement the shortest path algorithm once, and it works with any queue implementation. This allows us to benchmark and tune specific queues for specific algorithms or input families.
 //! * The second implementation, with a decrease-key queue, pushes a great portion of the complexity, or bookkeeping, to the queue and leads to a cleaner algorithm implementation.
 //!
 //! ```rust
@@ -208,8 +200,8 @@
 //!             continue;
 //!         }
 //!
-//!         let mut out_edges = graph.out_edges(node);
-//!         while let Some(Edge { head, weight }) = out_edges.next() {
+//!         let out_edges = graph.out_edges(node);
+//!         for Edge { head, weight } in out_edges {
 //!             let next_cost = cost + weight;
 //!             if next_cost < dist[*head] {
 //!                 queue.push(*head, next_cost);
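For readers skimming the hunk above: the change swaps a manual `while let ... .next()` loop for a `for` loop over the same iterator. A self-contained sketch of the equivalence, using a hypothetical `Edge` type and edge list rather than the crate example's graph, is below.

```rust
// Hypothetical edge type standing in for the one used in the crate's example.
struct Edge {
    head: usize,
    weight: u32,
}

fn main() {
    let out_edges = vec![Edge { head: 1, weight: 4 }, Edge { head: 2, weight: 1 }];

    // Old form: driving the iterator by hand requires a mutable binding.
    let mut iter = out_edges.iter();
    while let Some(Edge { head, weight }) = iter.next() {
        println!("-> {head} (weight {weight})");
    }

    // New form: the `for` loop owns and drives the iterator, so no `mut` binding is needed.
    for Edge { head, weight } in out_edges.iter() {
        println!("-> {head} (weight {weight})");
    }
}
```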
@@ -243,8 +235,8 @@
 //!             return Some(cost);
 //!         }
 //!
-//!         let mut out_edges = graph.out_edges(node);
-//!         while let Some(Edge { head, weight }) = out_edges.next() {