Title: Response backpressure for dynamic modules
Description:
Thanks for all the help with terminal dynamic modules - I am getting quite good results from using them for a Python ASGI server.
I think the only remaining feature that could help is response backpressure. I've mentioned before that it's not strictly required for most apps, but to be a generally production-ready server I guess it will need it.
To clarify what I mean: in ASGI, the response body is sent by awaiting a coroutine:
await send({"body": b"foo"})
In principle, this awaitable should complete once the I/O completes, or some approximation of that.
Currently I complete the send immediately, because there is no way to hook into I/O completion. This means a loop like the following can flood the connection, with the Python code executing the loop almost synchronously:
for _ in range(10000):
    await send({"body": b"f" * 1000})
Instead, we'd like send to complete roughly when it is safe to send again, which for a flooded connection can mean flow control introducing delays.
A callback-style API in Envoy might look like this:
encodeData(body, end_of_stream, event_id)
where the scheduler is invoked with the event ID when I/O completes. This is probably a huge change.
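For concreteness, this is roughly how the module side could drive such an event-ID based API. This is only a sketch; encode_data and on_io_complete are hypothetical names, not an existing ABI. Each send parks a future keyed by the event ID, and the scheduler resolves it when Envoy reports that the write completed.

import asyncio
import itertools

_pending: dict[int, asyncio.Future] = {}
_event_ids = itertools.count()

def encode_data(body: bytes, end_of_stream: bool, event_id: int) -> None:
    ...  # hypothetical ABI call into Envoy's filter chain

async def send_body(body: bytes, end_of_stream: bool = False) -> None:
    event_id = next(_event_ids)
    fut = asyncio.get_running_loop().create_future()
    _pending[event_id] = fut
    encode_data(body, end_of_stream, event_id)
    await fut  # resolves once Envoy signals I/O completion for this event

def on_io_complete(event_id: int) -> None:
    # Hypothetical scheduler hook: called when I/O for this event completes.
    _pending.pop(event_id).set_result(None)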
It should be enough to expose the existing watermark callbacks to dynamic modules. Currently, a native filter can register watermark callbacks; since registering arbitrary callbacks across a C ABI is challenging, I think it's better to stick with the existing on_* style of event hooks.
impl Filter {
    fn on_watermark_event(&mut self) { /* high or low watermark */ }

    fn on_request_headers(&mut self, ehf: &mut EnvoyHttpFilter) {
        // Maybe doesn't need to be opt-in? I don't know if there's a perf impact.
        ehf.enable_watermark_events();
    }
}
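On the Python side, those watermark events would be enough to implement the blocking send described above. A minimal sketch, assuming the filter forwards high/low watermark notifications to the ASGI server; all names below are hypothetical:

import asyncio

class BackpressureGate:
    """Gates the ASGI send coroutine on connection writability."""

    def __init__(self) -> None:
        self._writable = asyncio.Event()
        self._writable.set()  # writable until told otherwise

    # Forwarded from the filter's on_watermark_event callback (in practice these
    # would be dispatched onto the event loop, e.g. via call_soon_threadsafe).
    def on_above_high_watermark(self) -> None:
        self._writable.clear()

    def on_below_low_watermark(self) -> None:
        self._writable.set()

    async def send(self, body: bytes) -> None:
        await self._writable.wait()  # block the app while the connection is flooded
        self._write_to_filter(body)

    def _write_to_filter(self, body: bytes) -> None:
        ...  # hand the chunk to the dynamic module / filter chain

With that in place, the flooding loop above would naturally slow down to the pace of the downstream connection.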
Let me know if it seems OK to add such an ABI, or if there are other alternatives.
/cc @mathetake @wbpcode