Commit d0f8dba

Merge pull request #19 from orxfun/minor-revision-in-documentation
minor revision in documentation
2 parents 71cbb56 + 5538ecb commit d0f8dba

File tree

3 files changed: +29 −47 lines

Cargo.toml (1 addition, 1 deletion)

@@ -1,6 +1,6 @@
 [package]
 name = "orx-priority-queue"
-version = "1.2.0"
+version = "1.2.1"
 edition = "2021"
 authors = ["orxfun <[email protected]>"]
 description = "Priority queue traits and high performance d-ary heap implementations."

README.md (14 additions, 23 deletions)
@@ -20,39 +20,31 @@ Three categories of d-ary heap implementations are provided.
 
 All the heap types have a constant generic parameter `D` which defines the maximum number of children of a node in the tree. Note that d-ary heap is a generalization of the binary heap for which d=2:
 * With a large d: number of per level comparisons increases while the tree depth becomes smaller.
-* With a small d: each level required fewer comparisons while the tree gets deeper.
+* With a small d: each level requires fewer comparisons while the tree gets deeper.
 
 There is no dominating variant for all use cases. Binary heap is often the preferred choice due to its simplicity of implementation. However, the d-ary implementations in this crate, taking benefit of the **const generics**, provide a generalization, making it easy to switch between the variants. The motivation is to allow for tuning the heap to the algorithms and relevant input sets for performance critical methods.
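The arity/depth trade-off described in the diff comes down to index arithmetic over a flat array. A minimal sketch, with a hard-coded arity standing in for the const generic `D` (the helper names here are illustrative, not the crate's API):

```rust
// Flat-array layout of a d-ary heap: node i's children occupy the slots
// D*i + 1 ..= D*i + D, and its parent is (i - 1) / D.
const D: usize = 4; // compile-time arity, analogous to the const generic `D`

fn parent(i: usize) -> usize {
    debug_assert!(i > 0, "the root has no parent");
    (i - 1) / D
}

fn first_child(i: usize) -> usize {
    D * i + 1
}

fn main() {
    // In a 4-ary heap, node 0 has children 1..=4 and node 1 has children 5..=8,
    // so the tree is shallower than a binary heap over the same elements.
    assert_eq!(first_child(0), 1);
    assert_eq!(first_child(1), 5);
    assert_eq!(parent(5), 1);
    assert_eq!(parent(4), 0);
}
```

When `D` is a power of two, the division and multiplication reduce to shifts, which is the traversal speed-up the documentation alludes to.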
 
-### `DaryHeap<N, K, const D: usize>`
+### `DaryHeap`
 
 This is the basic d-ary heap implementing `PriorityQueue<N, K>`. It is to be the default choice unless priority updates or decrease-key operations are required.
 
-### `DaryHeapOfIndices<N, K, const D>`
+### `DaryHeapOfIndices`
 
 This is a d-ary heap paired up with a positions array and implements `PriorityQueueDecKey<N, K>`.
 
 * It requires the nodes to implement `HasIndex` trait which is nothing but `fn index(&self) -> usize`. Note that `usize`, `u64`, etc., already implements `HasIndex`.
-* Further, it requires the maximum index that is expected to enter the queue (candidates coming from a closed set).
+* Further, it requires to know the maximum index that is expected to enter the queue (candidates coming from a closed set).
 
-Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues. Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph, or indices in `0..numNodes`.
+Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues. Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph, or indices in `0..num_nodes`.
 
 This is the default decrease-key queue provided that the requirements are satisfied.
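To illustrate why the closed-set requirement pays off, here is a minimal, self-contained sketch of the positions-array idea (binary, i.e. d = 2, for brevity; `IndexedMinHeap` and its methods are illustrative names mirroring the documented `with_index_bound` constructor, not the crate's implementation):

```rust
// A min-heap whose node locations are tracked in a plain array: because
// candidates come from the closed set 0..bound, `positions[node]` is an O(1)
// lookup, so decrease-key is just a sift-up from the node's current slot.
struct IndexedMinHeap {
    heap: Vec<(usize, u32)>,       // (node, key)
    positions: Vec<Option<usize>>, // node -> slot in `heap`
}

impl IndexedMinHeap {
    fn with_index_bound(bound: usize) -> Self {
        Self { heap: Vec::new(), positions: vec![None; bound] }
    }

    fn push(&mut self, node: usize, key: u32) {
        self.heap.push((node, key));
        self.sift_up(self.heap.len() - 1);
    }

    fn decrease_key(&mut self, node: usize, key: u32) {
        let slot = self.positions[node].expect("node must be in the queue");
        debug_assert!(key <= self.heap[slot].1);
        self.heap[slot].1 = key;
        self.sift_up(slot);
    }

    fn sift_up(&mut self, mut i: usize) {
        while i > 0 && self.heap[(i - 1) / 2].1 > self.heap[i].1 {
            self.heap.swap(i, (i - 1) / 2);
            self.positions[self.heap[i].0] = Some(i); // keep positions in sync
            i = (i - 1) / 2;
        }
        self.positions[self.heap[i].0] = Some(i);
    }

    fn peek(&self) -> Option<(usize, u32)> {
        self.heap.first().copied()
    }
}

fn main() {
    let mut queue = IndexedMinHeap::with_index_bound(4);
    queue.push(0, 10);
    queue.push(1, 7);
    queue.push(2, 9);
    queue.decrease_key(2, 3); // node 2 jumps to the front
    assert_eq!(queue.peek(), Some((2, 3)));
}
```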
 
-### `DaryHeapWithMap<N, K, const D>`
+### `DaryHeapWithMap`
 
 This is a d-ary heap paired up with a positions map (`HashMap` or `BTreeMap` when no-std) and implements `PriorityQueueDecKey<N, K>`.
 
 This is the most general decrease-key queue that provides the open-set flexibility and fits to almost all cases.
4747

48-
The following two types additionally implement `PriorityQueueDecKey<N, K>` which serve different purposes:
49-
50-
* **`DaryHeapOfIndices<N, K, const D>`** is a d-ary heap paired up with a positions array. It requires the nodes to implement `HasIndex` trait which is nothing but `fn index(&self) -> usize`. Further, it requires the maximum index that is expected to enter the queue (candidates coming from a closed set). Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues.
51-
* Although the closed set requirement might sound strong, it is often satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph.
52-
* **`DaryHeapWithMap<N, K, const D: usize>`** is a d-ary heap paired up with a positions `HashMap` (`BTreeMap` with no-std). This provides the open-set flexibility and fits better to more general cases, rather than mathematical algorithms.
53-
54-
All three variants of the d-ary heap implementations take complete benefit of const generics to speed up traversal on the heap when d is a power of two.
55-
5648
### Other Queues
5749

5850
In addition, queue implementations are provided in this crate for the following external data structures:
@@ -65,9 +57,9 @@ This allows to use all the queue implementations interchangeably and measure per
 
 ### Performance & Benchmarks
 
-In scenarios in tested "src/benches", `DaryHeap` performs:
-* comparable to, slightly faster than, `std::collections::BinaryHeap` for simple queue operations; and
-* significantly faster than queues implementing PriorityQueueDecKey for decrease key operations.
+In scenarios in tested "src/benches":
+* `DaryHeap` performs slightly faster than `std::collections::BinaryHeap` for simple queue operations; and
+* `DaryHeapOfIndices` performs significantly faster than queues implementing PriorityQueueDecKey for scenarios requiring decrease key operations.
 
 See [Benchmarks](https://github.com/orxfun/orx-priority-queue/blob/main/docs/Benchmarks.md) section to see the experiments and observations.
 
@@ -157,7 +149,7 @@ test_priority_queue_deckey(QuarternaryHeapOfIndices::with_index_bound(100));
 
 You may see below two implementations one using a `PriorityQueue` and the other with a `PriorityQueueDecKey`. Please note the following:
 
-* `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore we are able to implement the shortest path algorithm once that works for any specific queue implementation. This allows to benchmark and tune specific queues for specific algorithms or input families.
+* `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore we are able to implement the shortest path algorithm once that works for any queue implementation. This allows to benchmark and tune specific queues for specific algorithms or input families.
 * The second implementation with a decrease key queue pushes a great portion of complexity, or bookkeeping, to the queue and leads to a cleaner algorithm implementation.
 
 ```rust
@@ -208,8 +200,8 @@ fn dijkstras_with_basic_pq<Q: PriorityQueue<usize, Weight>>(
             continue;
         }
 
-        let mut out_edges = graph.out_edges(node);
-        while let Some(Edge { head, weight }) = out_edges.next() {
+        let out_edges = graph.out_edges(node);
+        for Edge { head, weight } in out_edges {
             let next_cost = cost + weight;
             if next_cost < dist[*head] {
                 queue.push(*head, next_cost);
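For comparison, the basic-queue pattern shown in the hunk above can be exercised end to end with std's `BinaryHeap` (a 2-ary heap) standing in for `DaryHeap`. The adjacency-list representation and the function name below are illustrative, not the crate's API:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

type Weight = u32;

// Dijkstra with a basic (no decrease-key) priority queue: stale entries are
// allowed into the heap and skipped on pop, mirroring `dijkstras_with_basic_pq`.
fn dijkstra(adj: &[Vec<(usize, Weight)>], source: usize, sink: usize) -> Option<Weight> {
    let mut dist = vec![Weight::MAX; adj.len()];
    let mut queue = BinaryHeap::new();
    dist[source] = 0;
    queue.push(Reverse((0, source))); // Reverse turns the max-heap into a min-heap

    while let Some(Reverse((cost, node))) = queue.pop() {
        if node == sink {
            return Some(cost);
        }
        if cost > dist[node] {
            continue; // stale entry: a shorter path was already settled
        }
        for &(head, weight) in &adj[node] {
            let next_cost = cost + weight;
            if next_cost < dist[head] {
                dist[head] = next_cost;
                queue.push(Reverse((next_cost, head)));
            }
        }
    }
    None
}

fn main() {
    // 0 -> 1 (4), 0 -> 2 (1), 2 -> 1 (2): the shortest 0-to-1 path costs 3.
    let adj = vec![vec![(1, 4), (2, 1)], vec![], vec![(1, 2)]];
    assert_eq!(dijkstra(&adj, 0, 1), Some(3));
    assert_eq!(dijkstra(&adj, 1, 2), None);
}
```

The bookkeeping of stale heap entries is exactly what the decrease-key variant in the next hunk pushes into the queue.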
@@ -243,8 +235,8 @@ fn dijkstras_with_deckey_pq<Q: PriorityQueueDecKey<usize, Weight>>(
             return Some(cost);
         }
 
-        let mut out_edges = graph.out_edges(node);
-        while let Some(Edge { head, weight }) = out_edges.next() {
+        let out_edges = graph.out_edges(node);
+        for Edge { head, weight } in out_edges {
             if !visited[*head] {
                 queue.try_decrease_key_or_push(&head, cost + weight);
             }
@@ -255,7 +247,6 @@ fn dijkstras_with_deckey_pq<Q: PriorityQueueDecKey<usize, Weight>>(
     None
 }
 
-
 // TESTS: basic priority queues
 
 let e = |head: usize, weight: Weight| Edge { head, weight };

src/lib.rs (14 additions, 23 deletions)

@@ -20,39 +20,31 @@
 //!
 //! All the heap types have a constant generic parameter `D` which defines the maximum number of children of a node in the tree. Note that d-ary heap is a generalization of the binary heap for which d=2:
 //! * With a large d: number of per level comparisons increases while the tree depth becomes smaller.
-//! * With a small d: each level required fewer comparisons while the tree gets deeper.
+//! * With a small d: each level requires fewer comparisons while the tree gets deeper.
 //!
 //! There is no dominating variant for all use cases. Binary heap is often the preferred choice due to its simplicity of implementation. However, the d-ary implementations in this crate, taking benefit of the **const generics**, provide a generalization, making it easy to switch between the variants. The motivation is to allow for tuning the heap to the algorithms and relevant input sets for performance critical methods.
 //!
-//! ### `DaryHeap<N, K, const D: usize>`
+//! ### `DaryHeap`
 //!
 //! This is the basic d-ary heap implementing `PriorityQueue<N, K>`. It is to be the default choice unless priority updates or decrease-key operations are required.
 //!
-//! ### `DaryHeapOfIndices<N, K, const D>`
+//! ### `DaryHeapOfIndices`
 //!
 //! This is a d-ary heap paired up with a positions array and implements `PriorityQueueDecKey<N, K>`.
 //!
 //! * It requires the nodes to implement `HasIndex` trait which is nothing but `fn index(&self) -> usize`. Note that `usize`, `u64`, etc., already implements `HasIndex`.
-//! * Further, it requires the maximum index that is expected to enter the queue (candidates coming from a closed set).
+//! * Further, it requires to know the maximum index that is expected to enter the queue (candidates coming from a closed set).
 //!
-//! Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues. Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph, or indices in `0..numNodes`.
+//! Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues. Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph, or indices in `0..num_nodes`.
 //!
 //! This is the default decrease-key queue provided that the requirements are satisfied.
 //!
-//! ### `DaryHeapWithMap<N, K, const D>`
+//! ### `DaryHeapWithMap`
 //!
 //! This is a d-ary heap paired up with a positions map (`HashMap` or `BTreeMap` when no-std) and implements `PriorityQueueDecKey<N, K>`.
 //!
 //! This is the most general decrease-key queue that provides the open-set flexibility and fits to almost all cases.
 //!
-//! The following two types additionally implement `PriorityQueueDecKey<N, K>` which serve different purposes:
-//!
-//! * **`DaryHeapOfIndices<N, K, const D>`** is a d-ary heap paired up with a positions array. It requires the nodes to implement `HasIndex` trait which is nothing but `fn index(&self) -> usize`. Further, it requires the maximum index that is expected to enter the queue (candidates coming from a closed set). Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues.
-//! * Although the closed set requirement might sound strong, it is often satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph.
-//! * **`DaryHeapWithMap<N, K, const D: usize>`** is a d-ary heap paired up with a positions `HashMap` (`BTreeMap` with no-std). This provides the open-set flexibility and fits better to more general cases, rather than mathematical algorithms.
-//!
-//! All three variants of the d-ary heap implementations take complete benefit of const generics to speed up traversal on the heap when d is a power of two.
-//!
 //! ### Other Queues
 //!
 //! In addition, queue implementations are provided in this crate for the following external data structures:
@@ -65,9 +57,9 @@
 //!
 //! ### Performance & Benchmarks
 //!
-//! In scenarios in tested "src/benches", `DaryHeap` performs:
-//! * comparable to, slightly faster than, `std::collections::BinaryHeap` for simple queue operations; and
-//! * significantly faster than queues implementing PriorityQueueDecKey for decrease key operations.
+//! In scenarios in tested "src/benches":
+//! * `DaryHeap` performs slightly faster than `std::collections::BinaryHeap` for simple queue operations; and
+//! * `DaryHeapOfIndices` performs significantly faster than queues implementing PriorityQueueDecKey for scenarios requiring decrease key operations.
 //!
 //! See [Benchmarks](https://github.com/orxfun/orx-priority-queue/blob/main/docs/Benchmarks.md) section to see the experiments and observations.
 //!
@@ -157,7 +149,7 @@
 //!
 //! You may see below two implementations one using a `PriorityQueue` and the other with a `PriorityQueueDecKey`. Please note the following:
 //!
-//! * `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore we are able to implement the shortest path algorithm once that works for any specific queue implementation. This allows to benchmark and tune specific queues for specific algorithms or input families.
+//! * `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore we are able to implement the shortest path algorithm once that works for any queue implementation. This allows to benchmark and tune specific queues for specific algorithms or input families.
 //! * The second implementation with a decrease key queue pushes a great portion of complexity, or bookkeeping, to the queue and leads to a cleaner algorithm implementation.
 //!
 //! ```rust
@@ -208,8 +200,8 @@
 //!         continue;
 //!     }
 //!
-//!     let mut out_edges = graph.out_edges(node);
-//!     while let Some(Edge { head, weight }) = out_edges.next() {
+//!     let out_edges = graph.out_edges(node);
+//!     for Edge { head, weight } in out_edges {
 //!         let next_cost = cost + weight;
 //!         if next_cost < dist[*head] {
 //!             queue.push(*head, next_cost);
@@ -243,8 +235,8 @@
 //!         return Some(cost);
 //!     }
 //!
-//!     let mut out_edges = graph.out_edges(node);
-//!     while let Some(Edge { head, weight }) = out_edges.next() {
+//!     let out_edges = graph.out_edges(node);
+//!     for Edge { head, weight } in out_edges {
 //!         if !visited[*head] {
 //!             queue.try_decrease_key_or_push(&head, cost + weight);
 //!         }
@@ -255,7 +247,6 @@
 //!     None
 //! }
 //!
-//!
 //! // TESTS: basic priority queues
 //!
 //! let e = |head: usize, weight: Weight| Edge { head, weight };
