Merged
14 changes: 12 additions & 2 deletions files/en-us/web/api/readablestream/tee/index.md
@@ -17,12 +17,22 @@ The **`tee()`** method of the
two-element array containing the two resulting branches as
new {{domxref("ReadableStream")}} instances.

This is useful for allowing two readers to read a stream sequentially or simultaneously,
perhaps at different speeds.
You might do this for example in a ServiceWorker if you want to fetch
a response from the server and stream it to the browser, but also stream it to the
ServiceWorker cache. Since a response body cannot be consumed more than once, you'd need
two copies to do this.
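A minimal sketch of that two-copy pattern (illustrative only — it uses plain `Response` objects rather than a real ServiceWorker, where one branch would go to `event.respondWith()` and the other to the cache):

```javascript
// A body can only be consumed once, so tee() it and hand one
// branch to each consumer.
const response = new Response("payload");
const [forPage, forCache] = response.body.tee();

// Each branch can be wrapped back into a Response and consumed independently.
const pageText = new Response(forPage).text();
const cacheText = new Response(forCache).text();

Promise.all([pageText, cacheText]).then(([a, b]) => {
  console.log(a === b); // true — both branches carry the full body
});
```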

A teed stream will apply backpressure at the rate of the *faster* consumed `ReadableStream`,
Contributor
Drive-by comment here.

Even though I know well how connections and messages work (I was able to manually decode TCP messages back then), as a non-native reader, I find the term backpressure difficult to understand. I think we should find a better term, or at least succinctly define it the first time we use it in a page.

Contributor Author

@yonran yonran May 31, 2022
Fair point. I realize that I was using backpressure as a verb, like in the akka-streams documentation, e.g. Source.alsoTo. But in the Node.js stream docs and the Streams spec, backpressure is only used as a noun. Also, the growable internal queue and the support for both pull sources and push sources when implementing an underlying source make backpressure more complicated.

For a ReadableStream in the Streams spec, backpressure is specifically the condition that controller.desiredSize <= 0. A push underlying source is expected to check this periodically and pause calling controller.enqueue(chunk), and the controller stops repeatedly calling pull(controller) on an underlying source that is a pull source.
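A hedged sketch of that condition for a push source (the interval timer is a stand-in for a real push supply such as a socket, and the names are illustrative):

```javascript
// A push-style underlying source that checks controller.desiredSize
// before enqueuing. highWaterMark is raised so the example can run to
// completion even without a reader attached.
const stream = new ReadableStream(
  {
    start(controller) {
      let n = 0;
      const timer = setInterval(() => {
        if (controller.desiredSize <= 0) {
          return; // backpressure: the internal queue is full, skip this tick
        }
        controller.enqueue(`chunk-${n}`);
        n += 1;
        if (n === 3) {
          clearInterval(timer);
          controller.close();
        }
      }, 5);
    },
  },
  new CountQueuingStrategy({ highWaterMark: 3 })
);
```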

Contributor Author

@yonran yonran Jun 1, 2022

@teoli2003 I have added a technical explanation of backpressure in ReadableStream.tee, and I also opened a PR to the Streams spec, so hopefully the spec will match my description: whatwg/streams#1234.

I also added an informal description of backpressure to Response.clone.

Collaborator
This is interesting. I hadn't got to my reading around Tees yet!

FYI Just to be a little bit pedantic because my head is in this at the moment for byte streams work ...

A push underlying source doesn't use backpressure to change how it enqueues new data - it will always enqueue whatever data it gets. If it gets "backpressure" it is supposed to signal the thing that is providing the data (e.g. a socket) to pause or throttle the supply. The difference is that not every supply has a mechanism for throttling or pausing.

As you say, for a pull source the controller manages the pull requests based on need (the spec appears to say that it keeps requesting until buffers are filled (desired size = 0), then stops calling it until the queue empties and more data is requested). Though it looks like there are different strategies you can use there.

Collaborator

@hamishwillee hamishwillee Jun 13, 2022
PS Backpressure is actually not a bad term IMO, but as you say @teoli2003 it needs to be properly defined. I'd do that by linking to the section in the using streams guide (and make sure that is OK).

Backpressure is a bit like filling a somewhat empty pipe - at a certain rate the water starts pushing back up the pipe, and eventually you can't fill it any more.

and unread data is buffered in the internal queue
of the *slower* consumed `ReadableStream` without any limit or backpressure.
If only one branch is consumed, then the entire body will be buffered in memory.
Therefore, you should not use the built-in `tee()` to read very large streams
in parallel at different speeds.
Instead, search for an implementation that applies backpressure
based on the speed of the *slower* consumed branch.
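A short sketch of that buffering behavior (only five small chunks here, but the queue is unbounded for arbitrarily large streams):

```javascript
// With the built-in tee(), fully draining one branch while the other
// is unread forces all of the unread chunks onto the second branch's
// internal queue.
const source = new ReadableStream({
  start(controller) {
    for (let i = 0; i < 5; i++) controller.enqueue(`chunk-${i}`);
    controller.close();
  },
});

const [fast, slow] = source.tee();

async function drain(branch) {
  const reader = branch.getReader();
  const chunks = [];
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return chunks;
    chunks.push(value);
  }
}

// Drain `fast` to completion first; only then read `slow`, whose
// chunks sat buffered on its internal queue the whole time.
const buffered = drain(fast).then(() => drain(slow));
```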

To cancel the stream you then need to cancel both resulting branches. Teeing a stream
will generally lock it for the duration, preventing other readers from locking it.

7 changes: 7 additions & 0 deletions files/en-us/web/api/request/clone/index.md
@@ -14,6 +14,13 @@ browser-compat: api.Request.clone

The **`clone()`** method of the {{domxref("Request")}} interface creates a copy of the current `Request` object.

Like the underlying {{domxref("ReadableStream.tee")}} API,
the {{domxref("Request.body", "body")}} of a cloned `Request`
will apply backpressure at the rate of the *faster* consumed `ReadableStream`,
and unread data is buffered in the internal queue
of the slower consumed `ReadableStream` without any limit or backpressure.
Beware when you construct a `Request` from a stream and then `clone` it.
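A minimal sketch of safe cloning (the URL and body are illustrative): clone before the one-shot body is consumed, and both copies can be read independently.

```javascript
// clone() must happen before the body is used.
const original = new Request("https://example.com/upload", {
  method: "POST",
  body: "request-payload",
});
const copy = original.clone();

const bodies = Promise.all([original.text(), copy.text()]);
bodies.then(([a, b]) => {
  console.log(a === b); // true — each copy yields the full body
});
```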

`clone()` throws a {{jsxref("TypeError")}} if the request body has already been used. In fact, the main reason `clone()` exists is to allow multiple uses of body objects (when they are one-use only).

If you intend to modify the request, you may prefer the {{domxref("Request")}} constructor.
9 changes: 9 additions & 0 deletions files/en-us/web/api/response/clone/index.md
@@ -14,6 +14,15 @@ browser-compat: api.Response.clone

The **`clone()`** method of the {{domxref("Response")}} interface creates a clone of a response object, identical in every way, but stored in a different variable.

Like the underlying {{domxref("ReadableStream.tee")}} API,
the {{domxref("Response.body", "body")}} of a cloned `Response`
will apply backpressure at the rate of the *faster* consumed `ReadableStream`,
and unread data is buffered in the internal queue
of the slower consumed `ReadableStream` without any limit or backpressure.
If only one branch is consumed, then the entire body will be buffered in memory.
Therefore, you should not use the built-in `clone()` to read very large bodies
in parallel at different speeds.

`clone()` throws a {{jsxref("TypeError")}} if the response body has already been used.
In fact, the main reason `clone()` exists is to allow multiple uses of body objects (when they are one-use only).
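A hedged sketch of that rule (body text is illustrative): cloning works before the body is consumed, and throws a `TypeError` afterwards.

```javascript
// clone() must come before the body is consumed; cloning a Response
// whose body has been used throws a TypeError.
const res = new Response("response-payload");
const copy = res.clone(); // fine: body not yet used

const outcome = res.text().then(() => {
  try {
    res.clone(); // body already consumed
    return "no error";
  } catch (err) {
    return err instanceof TypeError ? "TypeError" : "other error";
  }
});
outcome.then((result) => console.log(result)); // "TypeError"
```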
