src/algorithms/explanations/HSExp.md

Heap is a special tree-based data structure. A binary tree is said to follow a heap property if:

* It is a complete binary tree.
* All nodes in the tree follow the property that they are greater than their children, i.e. the largest element is at the root and both its children are smaller than the root, and so on. Such a heap is called a max-heap. If instead all nodes are smaller than their children, it is called a min-heap.

## Sorting with the heap
Once the heap has been formed, sorting is straightforward. For a max-heap, repeatedly swap the root (the largest item) with the last item in the array, and reform the heap.
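
This process can be sketched as follows (illustrative Python; the helper names are hypothetical, not taken from this system's implementation):

```python
def sift_down(a, root, end):
    """Restore the max-heap property for the subtree rooted at `root`,
    considering only the elements a[0..end-1]."""
    while True:
        child = 2 * root + 1              # left child of `root`
        if child >= end:
            return
        # pick the larger of the two children
        if child + 1 < end and a[child + 1] > a[child]:
            child += 1
        if a[root] >= a[child]:
            return                        # heap property already holds
        a[root], a[child] = a[child], a[root]
        root = child

def heapsort(a):
    n = len(a)
    # build the max-heap: sift down every internal node, bottom up
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
    # repeatedly swap the root (largest item) with the last unsorted item,
    # shrink the heap by one, and reform the heap
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end)
    return a
```

Note that the sort works in place: the sorted suffix grows from the right while the heap shrinks from the left.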
## Complexity
Time complexity:
```
Average Case  O(n log n)
Worst Case    O(n log n)
Best Case     O(n)
```
Note: the best case, when all elements are equal, is O(n), although many
sources list the best case as O(n log n).

Space complexity is always O(1), that is, no extra space is needed.

[ Previous background treatment of complexity: Space complexity is O(1)
in all cases. Worst case and average case time complexity is O(n log n).
The best case time complexity is O(n), when all elements are equal
(despite many sources listing the best case as O(n log n)). ]
src/algorithms/explanations/QSExp.md

Quicksort is a divide and conquer algorithm. It first rearranges the input
array into two smaller sub-arrays: the (relatively) low elements and the
(relatively) high elements. It then recursively sorts each of the sub-arrays.

## Algorithm overview
The steps for basic Quicksort are:
* Pick the *pivot* element of the sub-array; here it is the rightmost element.

* Partitioning: reorder the sub-array so that only elements with values less than or equal to the pivot come before the pivot, while only elements with values greater than or equal to the pivot come after it. After this partitioning, the pivot is in its final position.
* Recursively apply the above steps to the sub-array of elements before the pivot and separately to the sub-array of elements after the pivot.
The base case of the recursion is sub-arrays of size one or zero, which are in order by definition, so they never need to be sorted.
## Partitioning
The way partitioning is done here is to use two pointers/indices to
scan through the sub-array. One starts at the left and scans right
in search of "large" elements (greater than or equal to the pivot).
The other starts at the right and scans left in search of "small"
elements (less than or equal to the pivot). Whenever a large and a small
element are found they are swapped. When the two indices meet, the pivot
is swapped into that position and partitioning is complete.
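
The two-pointer scheme just described can be sketched as follows (illustrative Python with the rightmost element as pivot; function names are hypothetical, not taken from this system's implementation):

```python
def partition(a, lo, hi):
    """Partition a[lo..hi] around the pivot a[hi] using two indices that
    scan towards each other; returns the pivot's final position."""
    pivot = a[hi]
    i, j = lo, hi - 1
    while True:
        while a[i] < pivot:                 # scan right for a "large" element
            i += 1
        while j > lo and a[j] > pivot:      # scan left for a "small" element
            j -= 1
        if i >= j:                          # the indices have met or crossed
            break
        a[i], a[j] = a[j], a[i]             # swap the large and small elements
        i += 1
        j -= 1
    a[i], a[hi] = a[hi], a[i]               # swap the pivot into its final position
    return i

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:                             # sub-arrays of size 0 or 1 are the base case
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)             # sort the elements before the pivot
        quicksort(a, p + 1, hi)             # sort the elements after the pivot
    return a
```

The inner scans stop on elements equal to the pivot, which keeps the partition reasonably balanced when there are many duplicate keys.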
## Time complexity
In the best case, partition divides the sub-array in half at each step,
resulting in <i>O(log n)</i> levels of recursion and <i>O(n log n)</i>
complexity overall. In the worst case, partition divides the sub-array
very unevenly at each step. The pivot element is either the largest or
smallest element in the sub-array and one of the resulting partitions
is always empty, resulting in <i>O(n<sup>2</sup>)</i> complexity.
This occurs if the input is sorted or reverse-sorted. Refinements such
as median of three partitioning (shown elsewhere) make the worst case
less likely. On average, partitioning is reasonably well balanced and
<i>O(n log n)</i> complexity results.
## Space complexity
Although there is no explicit additional space required, quicksort is
recursive, so it uses implicit stack space proportional to the depth of
recursion. The best and average cases are <i>O(log n)</i> but the worst
case is <i>O(n)</i>.
## Development of Quicksort
The first version of quicksort was published by Tony Hoare in 1961 and
quicksort remains the *fastest* sorting algorithm on average (subject to
[...] done in *many* different ways and the choice of specific implementation
details and computer hardware can significantly affect the algorithm's
performance. In 1975, Robert Sedgewick completed a Ph.D. thesis on this
single algorithm. Our presentation here is influenced by the original
Hoare version and some of Sedgewick's adaptations.