
Commit d2c91ea

Merge pull request #20 from orxfun/miri-safety
miri safety
2 parents d0f8dba + 5336535 · commit d2c91ea

File tree

16 files changed: +264 -251 lines changed

Cargo.toml

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 [package]
 name = "orx-priority-queue"
-version = "1.2.1"
+version = "1.3.0"
 edition = "2021"
 authors = ["orxfun <[email protected]>"]
 description = "Priority queue traits and high performance d-ary heap implementations."
@@ -19,7 +19,7 @@ priority-queue = { version = "2.0", optional = true }


 [[bench]]
-name = "push_then_pop"
+name = "basic_queue"
 harness = false

 [dev-dependencies]

README.md

Lines changed: 67 additions & 48 deletions
@@ -16,32 +16,32 @@ See [DecreaseKey](https://github.com/orxfun/orx-priority-queue/blob/main/docs/De

 ## B. d-ary Heap Implementations

-Three categories of d-ary heap implementations are provided.
-
-All the heap types have a constant generic parameter `D` which defines the maximum number of children of a node in the tree. Note that d-ary heap is a generalization of the binary heap for which d=2:
+d-ary implementations are generalizations of the binary heap; i.e., the binary heap is the special case where `D=2`. It is advantageous to have a parametrized d; for instance, in the benchmarks defined here, `D=4` outperforms `D=2`.

 * With a large d: number of per level comparisons increases while the tree depth becomes smaller.
-* With a small d: each level requires fewer comparisons while the tree gets deeper.
+* With a small d: each level requires fewer comparisons while the tree with the same number of nodes is deeper.

-There is no dominating variant for all use cases. Binary heap is often the preferred choice due to its simplicity of implementation. However, the d-ary implementations in this crate, taking benefit of the **const generics**, provide a generalization, making it easy to switch between the variants. The motivation is to allow for tuning the heap to the algorithms and relevant input sets for performance critical methods.
+Further, three categories of d-ary heap implementations are introduced.

-### `DaryHeap`
+### 1. DaryHeap (PriorityQueue)

-This is the basic d-ary heap implementing `PriorityQueue<N, K>`. It is to be the default choice unless priority updates or decrease-key operations are required.
+This is the basic d-ary heap implementing `PriorityQueue`. It is the default choice unless priority updates or decrease-key operations are required.

-### `DaryHeapOfIndices`
+### 2. DaryHeapOfIndices (PriorityQueue + PriorityQueueDecKey)

-This is a d-ary heap paired up with a positions array and implements `PriorityQueueDecKey<N, K>`.
+This is a d-ary heap paired up with a positions array and implements `PriorityQueueDecKey`.

 * It requires the nodes to implement the `HasIndex` trait, which is nothing but `fn index(&self) -> usize`. Note that `usize`, `u64`, etc., already implement `HasIndex`.
-* Further, it requires to know the maximum index that is expected to enter the queue (candidates coming from a closed set).
+* Further, it requires knowing the maximum index that is expected to enter the queue. In other words, candidates are expected to come from a closed set.
+
+Once these conditions are satisfied, it **performs significantly faster** than the alternative decrease key queues.

-Once these conditions are satisfied, it performs **significantly faster** than the alternative decrease key queues. Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidates set is the nodes of the graph, or indices in `0..num_nodes`.
+Although the closed set requirement might sound strong, it is often naturally satisfied in mathematical algorithms. For instance, for most network traversal algorithms, the candidate set is the nodes of the graph, or indices in `0..num_nodes`. Similarly, if the heap is to be used for sorting elements of a list, the indices simply come from `0..list_len`.

 This is the default decrease-key queue provided that the requirements are satisfied.

-### `DaryHeapWithMap`
+### 3. DaryHeapWithMap (PriorityQueue + PriorityQueueDecKey)

-This is a d-ary heap paired up with a positions map (`HashMap` or `BTreeMap` when no-std) and implements `PriorityQueueDecKey<N, K>`.
+This is a d-ary heap paired up with a positions map (`HashMap`, or `BTreeMap` when no-std) and also implements `PriorityQueueDecKey`.

 This is the most general decrease-key queue that provides the open-set flexibility and fits almost all cases.
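
The hunk above describes the `HasIndex` requirement of `DaryHeapOfIndices`. A minimal sketch of what satisfying it looks like is shown below; it is not part of this commit, the `City` type is hypothetical, and the assumption that `pop` returns the lowest-key `(node, key)` pair follows the README's own examples rather than anything verified here.

```rust
// Sketch only, not part of the diff. Any node type that can map itself into
// the closed set 0..index_bound can be stored in a DaryHeapOfIndices.
use orx_priority_queue::*;

#[derive(Clone, Debug, PartialEq)]
struct City(usize); // hypothetical node type; the index is the city id

impl HasIndex for City {
    fn index(&self) -> usize {
        self.0
    }
}

fn main() {
    // candidates come from the closed set 0..4, hence the index bound of 4
    let mut queue = DaryHeapOfIndices::<City, u32, 2>::with_index_bound(4);
    queue.push(City(2), 10);
    queue.push(City(0), 3);
    // assumed: pop returns the (node, key) pair with the lowest key
    assert_eq!(queue.pop(), Some((City(0), 3)));
}
```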
@@ -53,20 +53,32 @@ In addition, queue implementations are provided in this crate for the following
 * `priority_queue::PriorityQueue<N, K>` implements both `PriorityQueue<N, K>` and `PriorityQueueDecKey<N, K>`
   * requires `--features impl_priority_queue`

-This allows to use all the queue implementations interchangeably and measure performance.
+This allows using all the queue implementations interchangeably and picking the one that fits the use case best.

 ### Performance & Benchmarks

-In scenarios in tested "src/benches":
-* `DaryHeap` performs slightly faster than `std::collections::BinaryHeap` for simple queue operations; and
-* `DaryHeapOfIndices` performs significantly faster than queues implementing PriorityQueueDecKey for scenarios requiring decrease key operations.
+*You may find the details of the benchmarks in the [benches](https://github.com/orxfun/orx-priority-queue/blob/main/benches) folder.*
+
+<img src="https://raw.githubusercontent.com/orxfun/orx-priority-queue/main/docs/bench_results.PNG" alt="https://raw.githubusercontent.com/orxfun/orx-priority-queue/main/docs/bench_results.PNG" />
+
+The table above summarizes the benchmark results of basic operations on basic queues, and on queues allowing decrease key operations.
+
+* In the first benchmark, we repeatedly call `push` and `pop` operations on a queue while maintaining an average length of 100000:
+  * We observe that `BinaryHeap` (`DaryHeap<_, _, 2>`) performs almost the same as the standard binary heap.
+  * Experiments on different values of d show that `QuaternaryHeap` (D=4) outperforms both binary heaps.
+  * Further increasing D to 8 does not improve performance.
+  * Finally, we repeat the experiments with `BinaryHeap` and `QuaternaryHeap` using the specialized [`push_then_pop`](https://docs.rs/orx-priority-queue/latest/orx_priority_queue/trait.PriorityQueue.html#tymethod.push_then_pop) operation. Note that this operation further doubles the performance and, hence, should be used whenever it fits the use case.
+* In the second benchmark, we add [`decrease_key_or_push`](https://docs.rs/orx-priority-queue/latest/orx_priority_queue/trait.PriorityQueueDecKey.html#method.decrease_key_or_push) calls to the operations. The standard binary heap is excluded since it cannot implement `PriorityQueueDecKey`.
+  * We observe that `DaryHeapOfIndices` significantly outperforms the other decrease key queues.
+  * Among `BinaryHeapOfIndices` and `QuaternaryHeapOfIndices`, the latter with D=4 again performs better.

-See [Benchmarks](https://github.com/orxfun/orx-priority-queue/blob/main/docs/Benchmarks.md) section to see the experiments and observations.

 ## C. Examples

 ### C.1. Basic Usage

+The example below demonstrates basic usage of a simple `PriorityQueue`. You may see its entire functionality [here](https://docs.rs/orx-priority-queue/latest/orx_priority_queue/trait.PriorityQueue.html).
+
 ```rust
 use orx_priority_queue::*;

@@ -95,6 +107,24 @@ where
     }
 }

+// d-ary heap generic over const d
+const D: usize = 4;
+
+test_priority_queue(DaryHeap::<usize, f64, D>::default());
+test_priority_queue(DaryHeapWithMap::<usize, f64, D>::default());
+test_priority_queue(DaryHeapOfIndices::<usize, f64, D>::with_index_bound(100));
+
+// type aliases for common heaps: Binary or Quarternary
+test_priority_queue(BinaryHeap::default());
+test_priority_queue(QuarternaryHeapWithMap::default());
+test_priority_queue(BinaryHeapOfIndices::with_index_bound(100));
+```
+
+As mentioned, `PriorityQueueDecKey` extends the capabilities of a `PriorityQueue`. You may see the additional functionality [here](https://docs.rs/orx-priority-queue/latest/orx_priority_queue/trait.PriorityQueueDecKey.html).
+
+```rust
+use orx_priority_queue::*;
+
 // generic over decrease-key priority queues
 fn test_priority_queue_deckey<P>(mut pq: P)
 where
@@ -130,38 +160,27 @@ where
 // d-ary heap generic over const d
 const D: usize = 4;

-test_priority_queue(DaryHeap::<usize, f64, D>::default());
-test_priority_queue(DaryHeapWithMap::<usize, f64, D>::default());
-test_priority_queue(DaryHeapOfIndices::<usize, f64, D>::with_index_bound(100));
-
-test_priority_queue_deckey(DaryHeapWithMap::<usize, f64, D>::default());
 test_priority_queue_deckey(DaryHeapOfIndices::<usize, f64, D>::with_index_bound(100));
+test_priority_queue_deckey(DaryHeapWithMap::<usize, f64, D>::default());

-// or type aliases for common heaps to simplify signature
-// Binary or Quarternary to fix d of d-ary
-test_priority_queue(BinaryHeap::default());
-test_priority_queue(BinaryHeapWithMap::default());
-test_priority_queue(BinaryHeapOfIndices::with_index_bound(100));
-test_priority_queue_deckey(QuarternaryHeapOfIndices::with_index_bound(100));
+// type aliases for common heaps: Binary or Quarternary
+test_priority_queue_deckey(BinaryHeapOfIndices::with_index_bound(100));
+test_priority_queue_deckey(QuarternaryHeapWithMap::default());
 ```

 ### C.2. Usage in Dijkstra's Shortest Path

-You may see below two implementations one using a `PriorityQueue` and the other with a `PriorityQueueDecKey`. Please note the following:
+You may see below two implementations of Dijkstra's shortest path algorithm: one using a `PriorityQueue` and the other a `PriorityQueueDecKey`. Please note the following:

-* `PriorityQueue` and `PriorityQueueDecKey` traits enable algorithm implementations for generic queue types. Therefore we are able to implement the shortest path algorithm once that works for any queue implementation. This allows to benchmark and tune specific queues for specific algorithms or input families.
-* The second implementation with a decrease key queue pushes a great portion of complexity, or bookkeeping, to the queue and leads to a cleaner algorithm implementation.
+* Priority queue traits allow us to be generic over queues. Therefore, we are able to implement the algorithm once so that it works for any queue implementation.
+* The second implementation with a decrease key queue pushes some of the bookkeeping to the queue, and arguably leads to a cleaner algorithm implementation.

 ```rust
 use orx_priority_queue::*;

-// Some additional types to set up the example
-
-type Weight = u32;
-
 pub struct Edge {
     head: usize,
-    weight: Weight,
+    weight: u32,
 }

 pub struct Graph(Vec<Vec<Edge>>);
@@ -178,17 +197,15 @@ impl Graph {

 // Implementation using a PriorityQueue

-fn dijkstras_with_basic_pq<Q: PriorityQueue<usize, Weight>>(
+fn dijkstras_with_basic_pq<Q: PriorityQueue<usize, u32>>(
     graph: &Graph,
     queue: &mut Q,
     source: usize,
     sink: usize,
-) -> Option<Weight> {
-    // reset
-    queue.clear();
-    let mut dist = vec![Weight::MAX; graph.num_nodes()];
-
+) -> Option<u32> {
     // init
+    queue.clear();
+    let mut dist = vec![u32::MAX; graph.num_nodes()];
     dist[source] = 0;
     queue.push(source, 0);

@@ -215,13 +232,13 @@ fn dijkstras_with_basic_pq<Q: PriorityQueue<usize, Weight>>(

 // Implementation using a PriorityQueueDecKey

-fn dijkstras_with_deckey_pq<Q: PriorityQueueDecKey<usize, Weight>>(
+fn dijkstras_with_deckey_pq<Q: PriorityQueueDecKey<usize, u32>>(
     graph: &Graph,
     queue: &mut Q,
     source: usize,
     sink: usize,
-) -> Option<Weight> {
-    // reset
+) -> Option<u32> {
+    // init
     queue.clear();
     let mut visited = vec![false; graph.num_nodes()];

@@ -247,16 +264,18 @@ fn dijkstras_with_deckey_pq<Q: PriorityQueueDecKey<usize, Weight>>(
     None
 }

-// TESTS: basic priority queues
+// example input

-let e = |head: usize, weight: Weight| Edge { head, weight };
+let e = |head: usize, weight: u32| Edge { head, weight };
 let graph = Graph(vec![
     vec![e(1, 4), e(2, 5)],
     vec![e(0, 3), e(2, 6), e(3, 1)],
     vec![e(1, 3), e(3, 9)],
     vec![],
 ]);

+// TESTS: basic priority queues
+
 let mut pq = BinaryHeap::new();
 assert_eq!(Some(5), dijkstras_with_basic_pq(&graph, &mut pq, 0, 3));
 assert_eq!(None, dijkstras_with_basic_pq(&graph, &mut pq, 3, 1));
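
The README hunks above single out the fused `push_then_pop` operation as roughly twice as fast as separate calls in the benchmarks. A minimal sketch of the difference is given below; it is not part of the diff, and the assumption that `pop` and `push_then_pop` return the popped `(node, key)` pair is taken from the README's examples rather than verified against this commit.

```rust
// Sketch only: separate push + pop versus the fused push_then_pop.
use orx_priority_queue::*;

fn main() {
    let mut queue = DaryHeap::<usize, u64, 4>::default();
    queue.push(0, 42);
    queue.push(1, 7);

    // two operations: push a new node, then pop the node with the lowest key
    queue.push(2, 3);
    assert_eq!(queue.pop(), Some((2, 3)));

    // fused alternative: one call instead of a push followed by a pop
    // (assumed to return the popped (node, key) pair)
    let popped = queue.push_then_pop(3, 1);
    assert_eq!(popped, (3, 1));
}
```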

benches/basic_queue.rs

Lines changed: 2 additions & 2 deletions
@@ -72,7 +72,7 @@ fn run_on_dary_heap<const D: usize>(
     data: &TestData,
 ) {
     group.bench_with_input(
-        BenchmarkId::new(format!("orx_priority_queue::DaryHeap<_, _, {}>", D), n),
+        BenchmarkId::new(format!("DaryHeap<_, _, {}>", D), n),
         &n,
         |b, _| {
             b.iter(|| {
@@ -83,7 +83,7 @@ fn run_on_dary_heap<const D: usize>(
     );
 }
 fn bench_basic_queue(c: &mut Criterion) {
-    let treatments = vec![1_000, 10_000, 100_000];
+    let treatments = vec![100_000];

     let mut group = c.benchmark_group("basic_queue");

benches/deckey_queue.rs

Lines changed: 2 additions & 8 deletions
@@ -85,10 +85,7 @@ fn run_on_dary_heap_of_indices<const D: usize>(
     data: &TestData,
 ) {
     group.bench_with_input(
-        BenchmarkId::new(
-            format!("orx_priority_queue::DaryHeapOfIndices<_, _, {}>", D),
-            n,
-        ),
+        BenchmarkId::new(format!("DaryHeapOfIndices<_, _, {}>", D), n),
         &n,
         |b, _| {
             b.iter(|| {
@@ -104,10 +101,7 @@ fn run_on_dary_heap_with_map<const D: usize>(
     data: &TestData,
 ) {
     group.bench_with_input(
-        BenchmarkId::new(
-            format!("orx_priority_queue::DaryHeapWithMap<_, _, {}>", D),
-            n,
-        ),
+        BenchmarkId::new(format!("DaryHeapWithMap<_, _, {}>", D), n),
         &n,
         |b, _| {
             b.iter(|| {

benches/push_then_pop.rs

Lines changed: 2 additions & 1 deletion
@@ -74,7 +74,7 @@ fn run_on_dary_heap<const D: usize>(
     data: &TestData,
 ) {
     group.bench_with_input(
-        BenchmarkId::new(format!("orx_priority_queue::DaryHeap<_, _, {}>", D), n),
+        BenchmarkId::new(format!("DaryHeap<_, _, {}>", D), n),
         &n,
         |b, _| {
             b.iter(|| {
@@ -103,6 +103,7 @@ fn bench_push_then_pop(c: &mut Criterion) {
         },
     );

+    run_on_dary_heap::<2>(&mut group, *n, &data);
     run_on_dary_heap::<4>(&mut group, *n, &data);

     #[cfg(feature = "impl_priority_queue")]
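
The benchmark files above share one runner that is generic over the const arity `D` and register it for `D=2` and `D=4`. A self-contained sketch of that pattern is shown below; the workload, function name, and parameters are placeholders, not the crate's actual bench code.

```rust
// Sketch only: a Criterion benchmark parameterized over the heap arity D.
use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
use orx_priority_queue::*;

fn bench_dary_heap_for_d<const D: usize>(c: &mut Criterion, n: usize) {
    c.bench_with_input(
        BenchmarkId::new(format!("DaryHeap<_, _, {}>", D), n),
        &n,
        |b, &n| {
            b.iter(|| {
                // placeholder workload: push n nodes, then drain the heap
                let mut heap = DaryHeap::<usize, u64, D>::default();
                for i in 0..n {
                    heap.push(i, (n - i) as u64);
                }
                while let Some(popped) = heap.pop() {
                    criterion::black_box(popped);
                }
            })
        },
    );
}

fn bench(c: &mut Criterion) {
    let n = 1_000;
    bench_dary_heap_for_d::<2>(c, n);
    bench_dary_heap_for_d::<4>(c, n);
}

criterion_group!(benches, bench);
criterion_main!(benches);
```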
Binary file not shown (26.4 KB).

docs/Benchmarks.md

Lines changed: 0 additions & 41 deletions
This file was deleted.
