@@ -2850,9 +2850,9 @@ have threads!
 The async model provides a different—and ultimately complementary—set of
 tradeoffs. In the async model, concurrent operations don’t require their own
 threads. Instead, they can run on tasks, as when we used `trpl::spawn_task` to
-kick off work from a synchronous function throughout the streams section. A
-*task* is similar to a thread— but instead of being managed by the operating
-system, it’s managed by library-level code: the runtime.
+kick off work from a synchronous function throughout the streams section. A task
+is similar to a thread, but instead of being managed by the operating system,
+it’s managed by library-level code: the runtime.
 
 In the previous section, we saw that we could build a `Stream` by using an async
 channel and spawning an async task which we could call from synchronous code. We
@@ -2894,41 +2894,41 @@ from the perspective of the calling code! What’s more, even though one of our
 functions spawned an async task on the runtime and the other spawned an
 OS thread, the resulting streams were unaffected by the differences.
 
-However, there’s a significant difference between how these two approaches
-behave, although we might have a hard time measuring it in this very simple
-example. We could spawn hundreds of thousands or even millions of async tasks
-on any modern personal computer. If we tried to do that with threads, we would
-literally run out of memory!
+Despite the similarities, these two approaches behave very differently, although
+we might have a hard time measuring it in this very simple example. We could
+spawn millions of async tasks on any modern personal computer. If we tried to do
+that with threads, we would literally run out of memory!
 
 However, there’s a reason these APIs are so similar. Threads act as a boundary
 for sets of synchronous operations; concurrency is possible *between* threads.
 Tasks act as a boundary for sets of *asynchronous* operations; concurrency is
-possible both *between* and *within* tasks. In that regard, tasks are similar to
-lightweight, runtime-managed threads with added capabilities that come from
-being managed by a runtime instead of by the operating system. Futures are an
-even more granular unit of concurrency, where each future may represent a tree
-of other futures. That is, the runtime—specifically, its executor—manages tasks,
-and tasks manage futures.
-
-However, this doesn’t mean that async tasks are always better than threads, any
-more than that threads are always better than tasks.
-
-On the one hand, concurrency with threads is in some ways a simpler programming
-model than concurrency with `async`. Threads are somewhat “fire and forget,”
-they have no native equivalent to a future, so they simply run to completion,
-without interruption except by the operating system itself. That is, they have
-no *intra-task concurrency* the way futures can. Threads in Rust also have no
-mechanisms for cancellation—a subject we haven’t covered in depth in this
-chapter, but which is implicit in the fact that whenever we ended a future, its
-state got cleaned up correctly.
-
-These limitations make threads harder to compose than futures. It’s much more
-difficult, for example, to build something similar to the `timeout` we built in
-the “Building Our Own Async Abstractions” section of this chapter on page XX,
-or the `throttle` method we used with streams in the “Composing Streams”
-section of this chapter on page XX. The fact that futures are richer data
-structures means they *can* be composed together more naturally, as we have
-seen.
+possible both *between* and *within* tasks, because a task can switch between
+futures in its body. Finally, futures are Rust’s most granular unit of
+concurrency, and each future may represent a tree of other futures. The
+runtime—specifically, its executor—manages tasks, and tasks manage futures. In
+that regard, tasks are similar to lightweight, runtime-managed threads with
+added capabilities that come from being managed by a runtime instead of by the
+operating system.
+
+This doesn’t mean that async tasks are always better than threads, any more than
+that threads are always better than tasks.
+
+Concurrency with threads is in some ways a simpler programming model than
+concurrency with `async`. That can be a strength or a weakness. Threads are
+somewhat “fire and forget”; they have no native equivalent to a future, so they
+simply run to completion, without interruption except by the operating system
+itself. That is, they have no built-in support for *intra-task concurrency* the
+way futures do. Threads in Rust also have no mechanisms for cancellation—a
+subject we haven’t covered in depth in this chapter, but which is implicit in
+the fact that whenever we ended a future, its state got cleaned up correctly.
+
+These limitations also make threads harder to compose than futures. It’s much
+more difficult, for example, to use threads to build helpers such as the
+`timeout` we built in the “Building Our Own Async Abstractions” section of this
+chapter on page XX or the `throttle` method we used with streams in the
+“Composing Streams” section of this chapter on page XX. The fact that futures
+are richer data structures means they can be composed together more naturally,
+as we have seen.
 
 Tasks then give *additional* control over futures, allowing you to choose where
 and how to group the futures. And it turns out that threads and tasks often
@@ -2938,8 +2938,8 @@ hood the `Runtime` we have been using, including the `spawn_blocking` and
 `spawn_task` functions, is multithreaded by default! Many runtimes use an
 approach called *work stealing* to transparently move tasks around between
 threads based on the current utilization of the threads, with the aim of
-improving the overall performance of the system. To build, that actually
-requires threads *and* tasks, and therefore futures.
+improving the overall performance of the system. Building that actually requires
+threads *and* tasks, and therefore futures.
 
 As a default way of thinking about which to use when:
 
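The text above claims we could spawn millions of async tasks but would run out of memory doing the same with threads. That cost comes down to per-thread stacks: every OS thread reserves its own stack up front, while a task is little more than a heap allocation. A minimal std-only sketch using `std::thread::Builder`, which exposes that cost directly (the 64 KiB stack size is an illustrative assumption; platform defaults vary and are typically megabytes):

```rust
use std::thread;

fn main() {
    // Spawn 100 threads, each with an explicitly small stack. Even at
    // 64 KiB apiece this reserves ~6 MB of address space; at a typical
    // multi-megabyte default, millions of threads would exhaust memory
    // long before millions of tasks would.
    let handles: Vec<_> = (0..100)
        .map(|i| {
            thread::Builder::new()
                .stack_size(64 * 1024) // per-thread stack reservation
                .spawn(move || i * 2)
                .expect("failed to spawn thread")
        })
        .collect();

    // Join every thread and combine the results.
    let sum: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    assert_eq!(sum, 9900); // 2 * (0 + 1 + ... + 99)
}
```

This is only a sketch of the scaling argument; a runtime's tasks avoid the reservation entirely because they share the worker threads' stacks.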
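The text also observes that threads lack cancellation and are harder to compose than futures, citing the `timeout` helper as an example. A minimal std-only sketch of a thread-based equivalent makes the gap concrete (`run_with_timeout` is a hypothetical helper, not the book's `timeout`): the best a thread can do is *abandon* a result, because the spawned thread keeps running to completion regardless.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run `work` on its own thread and wait at most `limit` for its result.
// Unlike dropping a future, giving up here does NOT stop the work:
// threads have no cancellation, so the thread runs on in the background.
fn run_with_timeout<T: Send + 'static>(
    limit: Duration,
    work: impl FnOnce() -> T + Send + 'static,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // If the receiver has already given up, `send` fails; ignore it.
        let _ = tx.send(work());
    });
    rx.recv_timeout(limit).ok()
}

fn main() {
    // Fast work finishes within the limit...
    let fast = run_with_timeout(Duration::from_millis(100), || 2 + 2);
    assert_eq!(fast, Some(4));

    // ...but slow work is only abandoned, not cancelled: the spawned
    // thread keeps sleeping until the process exits.
    let slow = run_with_timeout(Duration::from_millis(10), || {
        thread::sleep(Duration::from_secs(1));
        "done"
    });
    assert_eq!(slow, None);
}
```

Contrast this with a future wrapped in a timeout: when the future loses the race, it is simply dropped and its state is cleaned up, which is the cancellation-by-drop behavior the chapter relies on.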