
Commit 1af831a

Ch. 17: Fix a lot of wording issues in the conclusion section
So many “however” sections! Etc.
1 parent 3b920cd commit 1af831a

2 files changed: 71 additions and 71 deletions


nostarch/chapter17.md

Lines changed: 36 additions & 36 deletions
@@ -2850,9 +2850,9 @@ have threads!
 The async model provides a different—and ultimately complementary—set of
 tradeoffs. In the async model, concurrent operations don’t require their own
 threads. Instead, they can run on tasks, as when we used `trpl::spawn_task` to
-kick off work from a synchronous function throughout the streams section. A
-*task* is similar to a threadbut instead of being managed by the operating
-system, it’s managed by library-level code: the runtime.
+kick off work from a synchronous function throughout the streams section. A task
+is similar to a thread, but instead of being managed by the operating system,
+it’s managed by library-level code: the runtime.

 In the previous section, we saw that we could build a `Stream` by using an async
 channel and spawning an async task which we could call from synchronous code. We
@@ -2894,41 +2894,41 @@ from the perspective of the calling code! What’s more, even though one of our
 functions spawned an async task on the runtime and the other spawned an
 OS thread, the resulting streams were unaffected by the differences.

-However, there’s a significant difference between how these two approaches
-behave, although we might have a hard time measuring it in this very simple
-example. We could spawn hundreds of thousands or even millions of async tasks
-on any modern personal computer. If we tried to do that with threads, we would
-literally run out of memory!
+Despite the similarities, these two approaches behave very differently, although
+we might have a hard time measuring it in this very simple example. We could
+spawn millions of async tasks on any modern personal computer. If we tried to do
+that with threads, we would literally run out of memory!

 However, there’s a reason these APIs are so similar. Threads act as a boundary
 for sets of synchronous operations; concurrency is possible *between* threads.
 Tasks act as a boundary for sets of *asynchronous* operations; concurrency is
-possible both *between* and *within* tasks. In that regard, tasks are similar to
-lightweight, runtime-managed threads with added capabilities that come from
-being managed by a runtime instead of by the operating system. Futures are an
-even more granular unit of concurrency, where each future may represent a tree
-of other futures. That is, the runtime—specifically, its executor—manages tasks,
-and tasks manage futures.
-
-However, this doesn’t mean that async tasks are always better than threads, any
-more than that threads are always better than tasks.
-
-On the one hand, concurrency with threads is in some ways a simpler programming
-model than concurrency with `async`. Threads are somewhat “fire and forget,”
-they have no native equivalent to a future, so they simply run to completion,
-without interruption except by the operating system itself. That is, they have
-no *intra-task concurrency* the way futures can. Threads in Rust also have no
-mechanisms for cancellation—a subject we haven’t covered in depth in this
-chapter, but which is implicit in the fact that whenever we ended a future, its
-state got cleaned up correctly.
-
-These limitations make threads harder to compose than futures. It’s much more
-difficult, for example, to build something similar to the `timeout` we built in
-the “Building Our Own Async Abstractions” section of this chapter on page XX,
-or the `throttle` method we used with streams in the “Composing Streams”
-section of this chapter on page XX. The fact that futures are richer data
-structures means they *can* be composed together more naturally, as we have
-seen.
+possible both *between* and *within* tasks, because a task can switch between
+futures in its body. Finally, futures are Rust’s most granular unit of
+concurrency, and each future may represent a tree of other futures. The
+runtime—specifically, its executor—manages tasks, and tasks manage futures. In
+that regard, tasks are similar to lightweight, runtime-managed threads with
+added capabilities that come from being managed by a runtime instead of by the
+operating system.
+
+This doesn’t mean that async tasks are always better than threads, any more than
+that threads are always better than tasks.
+
+Concurrency with threads is in some ways a simpler programming model than
+concurrency with `async`. That can be a strength or a weakness. Threads are
+somewhat “fire and forget,” they have no native equivalent to a future, so they
+simply run to completion, without interruption except by the operating system
+itself. That is, they have no built-in support for *intra-task concurrency* the
+way futures do. Threads in Rust also have no mechanisms for cancellation—a
+subject we haven’t covered in depth in this chapter, but which is implicit in
+the fact that whenever we ended a future, its state got cleaned up correctly.
+
+These limitations also make threads harder to compose than futures. It’s much
+more difficult, for example, to use threads to build helpers such as the
+`timeout` we built in the “Building Our Own Async Abstractions” section of this
+chapter on page XX or the `throttle` method we used with streams in the
+“Composing Streams” section of this chapter on page XX. The fact that futures
+are richer data structures means they can be composed together more naturally,
+as we have seen.

 Tasks then give *additional* control over futures, allowing you to choose where
 and how to group the futures. And it turns out that threads and tasks often
@@ -2938,8 +2938,8 @@ hood the `Runtime` we have been using, including the `spawn_blocking` and
 `spawn_task` functions, is multithreaded by default! Many runtimes use an
 approach called *work stealing* to transparently move tasks around between
 threads based on the current utilization of the threads, with the aim of
-improving the overall performance of the system. To build, that actually
-requires threads *and* tasks, and therefore futures.
+improving the overall performance of the system. To build that actually requires
+threads *and* tasks, and therefore futures.

 As a default way of thinking about which to use when:

src/ch17-06-futures-tasks-threads.md

Lines changed: 35 additions & 35 deletions
@@ -18,9 +18,9 @@ have threads!
 The async model provides a different—and ultimately complementary—set of
 tradeoffs. In the async model, concurrent operations don’t require their own
 threads. Instead, they can run on tasks, as when we used `trpl::spawn_task` to
-kick off work from a synchronous function throughout the streams section. A
-*task* is similar to a threadbut instead of being managed by the operating
-system, it’s managed by library-level code: the runtime.
+kick off work from a synchronous function throughout the streams section. A task
+is similar to a thread, but instead of being managed by the operating system,
+it’s managed by library-level code: the runtime.

 In the previous section, we saw that we could build a `Stream` by using an async
 channel and spawning an async task which we could call from synchronous code. We
@@ -42,40 +42,40 @@ from the perspective of the calling code! What’s more, even though one of our
 functions spawned an async task on the runtime and the other spawned an
 OS thread, the resulting streams were unaffected by the differences.

-However, there’s a significant difference between how these two approaches
-behave, although we might have a hard time measuring it in this very simple
-example. We could spawn hundreds of thousands or even millions of async tasks
-on any modern personal computer. If we tried to do that with threads, we would
-literally run out of memory!
+Despite the similarities, these two approaches behave very differently, although
+we might have a hard time measuring it in this very simple example. We could
+spawn millions of async tasks on any modern personal computer. If we tried to do
+that with threads, we would literally run out of memory!

 However, there’s a reason these APIs are so similar. Threads act as a boundary
 for sets of synchronous operations; concurrency is possible *between* threads.
 Tasks act as a boundary for sets of *asynchronous* operations; concurrency is
-possible both *between* and *within* tasks. In that regard, tasks are similar to
-lightweight, runtime-managed threads with added capabilities that come from
-being managed by a runtime instead of by the operating system. Futures are an
-even more granular unit of concurrency, where each future may represent a tree
-of other futures. That is, the runtime—specifically, its executor—manages tasks,
-and tasks manage futures.
-
-However, this doesn’t mean that async tasks are always better than threads, any
-more than that threads are always better than tasks.
-
-On the one hand, concurrency with threads is in some ways a simpler programming
-model than concurrency with `async`. Threads are somewhat “fire and forget,”
-they have no native equivalent to a future, so they simply run to completion,
-without interruption except by the operating system itself. That is, they have
-no *intra-task concurrency* the way futures can. Threads in Rust also have no
-mechanisms for cancellation—a subject we haven’t covered in depth in this
-chapter, but which is implicit in the fact that whenever we ended a future, its
-state got cleaned up correctly.
-
-These limitations make threads harder to compose than futures. It’s much more
-difficult, for example, to build something similar to the `timeout` we built in
-[“Building Our Own Async Abstractions”][combining-futures], or the `throttle`
-method we used with streams in [“Composing Streams”][streams]. The fact that
-futures are richer data structures means they *can* be composed together more
-naturally, as we have seen.
+possible both *between* and *within* tasks, because a task can switch between
+futures in its body. Finally, futures are Rust’s most granular unit of
+concurrency, and each future may represent a tree of other futures. The
+runtime—specifically, its executor—manages tasks, and tasks manage futures. In
+that regard, tasks are similar to lightweight, runtime-managed threads with
+added capabilities that come from being managed by a runtime instead of by the
+operating system.
+
+This doesn’t mean that async tasks are always better than threads, any more than
+that threads are always better than tasks.
+
+Concurrency with threads is in some ways a simpler programming model than
+concurrency with `async`. That can be a strength or a weakness. Threads are
+somewhat “fire and forget,” they have no native equivalent to a future, so they
+simply run to completion, without interruption except by the operating system
+itself. That is, they have no built-in support for *intra-task concurrency* the
+way futures do. Threads in Rust also have no mechanisms for cancellation—a
+subject we haven’t covered in depth in this chapter, but which is implicit in
+the fact that whenever we ended a future, its state got cleaned up correctly.
+
+These limitations also make threads harder to compose than futures. It’s much
+more difficult, for example, to use threads to build helpers such as the
+`timeout` we built in [“Building Our Own Async Abstractions”][combining-futures]
+or the `throttle` method we used with streams in [“Composing Streams”][streams].
+The fact that futures are richer data structures means they can be composed
+together more naturally, as we have seen.

 Tasks then give *additional* control over futures, allowing you to choose where
 and how to group the futures. And it turns out that threads and tasks often
@@ -85,8 +85,8 @@ hood the `Runtime` we have been using, including the `spawn_blocking` and
 `spawn_task` functions, is multithreaded by default! Many runtimes use an
 approach called *work stealing* to transparently move tasks around between
 threads based on the current utilization of the threads, with the aim of
-improving the overall performance of the system. To build, that actually
-requires threads *and* tasks, and therefore futures.
+improving the overall performance of the system. To build that actually requires
+threads *and* tasks, and therefore futures.

 As a default way of thinking about which to use when:
