feat(sse): wire @defer incremental delivery into Absinthe.Plug #2
Enable @defer SSE transport in the main plug request path:

- run_query: detect the incremental_delivery flag + SSE Accept header, route to deliver_incremental_sse instead of the standard JSON response
- deliver_incremental_sse: split the resolved result into an initial payload (without deferred fields) and incremental payloads, sent as SSE events (next/initial, next/incremental, complete)
- call: handle the {:ok, :streaming} result for already-sent SSE responses
- Clear before_send callbacks (ETag etc.) before send_chunked to prevent a crash on nil resp_body

Queries without @defer continue to return standard JSON responses.
Cursor Bugbot has reviewed your changes and found 4 potential issues.
Reviewed by Cursor Bugbot for commit dd6f8ff. Configure here.
    |> Plug.Conn.put_resp_header("content-type", "text/event-stream")
    |> Plug.Conn.put_resp_header("cache-control", "no-cache")
    |> Plug.Conn.put_resp_header("x-accel-buffering", "no")
    |> Map.update(:private, %{}, &Map.put(&1, :before_send, []))
Clearing before_send on wrong conn field
High Severity
The code clears before_send inside conn.private, but Plug.Conn stores before_send callbacks as a top-level struct field at conn.before_send. The send_chunked/2 function calls run_before_send, which reads from conn.before_send, not conn.private.before_send. This means the ETag/before_send crash described in the comment is not actually prevented — the callbacks will still fire with resp_body: nil during send_chunked.
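A minimal sketch of the intended fix, assuming Plug: `before_send` is a top-level field on the `%Plug.Conn{}` struct, so clearing it is a struct update rather than an edit inside `conn.private` (`clear_before_send` is an illustrative name, not code from this PR):

```elixir
# Clear the callbacks that send_chunked/2 would otherwise run via
# run_before_send/2. before_send is a top-level %Plug.Conn{} struct
# field, not a key inside conn.private.
defp clear_before_send(%Plug.Conn{} = conn) do
  %{conn | before_send: []}
end
```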
    # Send initial event
    initial_event = EventFormatter.format_event("next", initial_response, 0)
    {:ok, conn} = Plug.Conn.chunk(conn, initial_event)
Hard pattern match on chunk crashes on disconnect
Medium Severity
The initial chunk uses a hard pattern match {:ok, conn} = Plug.Conn.chunk(...) which raises MatchError if the client has disconnected and chunk returns {:error, reason}. Later chunk calls in the same function properly handle errors with case statements, making this inconsistency a crash risk under normal disconnect conditions.
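A hedged sketch of the consistent handling (assumes Plug; `stream_incremental_payloads/1` is a stand-in name for the rest of the streaming function):

```elixir
initial_event = EventFormatter.format_event("next", initial_response, 0)

case Plug.Conn.chunk(conn, initial_event) do
  {:ok, conn} ->
    # First chunk accepted; continue with the incremental payloads.
    stream_incremental_payloads(conn)

  {:error, _reason} ->
    # Client disconnected (e.g. {:error, :closed}); stop streaming
    # gracefully instead of raising MatchError.
    conn
end
```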
      {conn, {:ok, :streaming}}
    else
      {conn, {:ok, bp.result}}
    end
SSE triggers for wildcard Accept breaking standard clients
High Severity
accepts_sse? returns true for Accept: */*, which is the default header for most HTTP clients (curl, fetch, Postman, etc.). This means nearly all @defer queries through Absinthe.Plug will get SSE chunked responses instead of standard JSON, contradicting the PR description which states SSE is for Accept: text/event-stream specifically. Standard GraphQL clients expecting JSON will receive an unexpected text/event-stream response.
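One possible tightening, sketched with illustrative names (this is not the PR's code): only treat the request as SSE when `text/event-stream` is listed explicitly, so `Accept: */*` clients keep receiving JSON.

```elixir
defmodule AcceptCheck do
  # True only when the Accept header explicitly lists text/event-stream.
  # Wildcards such as */* deliberately do not count, so default-header
  # clients (curl, fetch, Postman) still get standard JSON.
  def accepts_sse?(accept_header) when is_binary(accept_header) do
    accept_header
    |> String.split(",")
    |> Enum.map(fn part ->
      # Drop q-values and parameters, e.g. "text/event-stream;q=0.9"
      part |> String.split(";") |> hd() |> String.trim() |> String.downcase()
    end)
    |> Enum.member?("text/event-stream")
  end
end
```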
    conn
    |> ConnectionManager.setup_sse_headers()
    |> put_resp_header("content-type", "text/event-stream")
    |> send_chunked(200)
Subscribe function drops critical SSE response headers
Medium Severity
Replacing ConnectionManager.setup_sse_headers() with only put_resp_header("content-type", "text/event-stream") drops five headers that setup_sse_headers previously set: cache-control: no-cache, connection: keep-alive, x-accel-buffering: no, and two CORS headers. Missing cache-control allows proxies to cache SSE events, and missing x-accel-buffering causes nginx to buffer the response, preventing real-time delivery of subscription events.
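A sketch of restoring the full header set inline, assuming Plug (the two CORS headers mentioned above are not named in the review, so they are left out here rather than guessed):

```elixir
conn
|> Plug.Conn.put_resp_header("content-type", "text/event-stream")
# Keep proxies from caching the event stream.
|> Plug.Conn.put_resp_header("cache-control", "no-cache")
|> Plug.Conn.put_resp_header("connection", "keep-alive")
# Tell nginx not to buffer, so events are delivered in real time.
|> Plug.Conn.put_resp_header("x-accel-buffering", "no")
|> Plug.Conn.send_chunked(200)
```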


Summary
Absinthe.Plug request path: @defer + Accept: text/event-stream now returns chunked SSE events; queries without @defer continue to return standard JSON.

What changed
lib/absinthe/plug.ex (115 insertions, 4 deletions)

- call/2 — new case for {:ok, :streaming} (already-sent SSE response)
- run_query/4 — detects incremental_delivery: true + SSE Accept header → routes to deliver_incremental_sse
- deliver_incremental_sse/2 (new) — splits the resolved result into initial/incremental payloads using defer_info from the streaming context, sends them as SSE events:
  - event: next — initial data (deferred fields stripped) + pending + hasNext: true
  - event: next — incremental data (deferred fields) + hasNext: false
  - event: complete
- Clears before_send callbacks before send_chunked to prevent an ETag plug crash on nil resp_body

Test plan
- @defer query: initial event has name only, incremental has the fields array
- Without @defer: returns JSON (no streaming)

Depends on
Note
Medium Risk
Adds chunked SSE response handling for @defer queries and tweaks subscription SSE headers/keep-alive behavior, which can affect HTTP response semantics and intermediaries (proxies/caches). Core GraphQL execution remains the same for non-incremental requests.

Overview
Wires @defer incremental delivery into Absinthe.Plug: when a pipeline run reports incremental_delivery: true and the request Accepts SSE, the plug now switches to a chunked text/event-stream response and returns {:ok, :streaming} to avoid double-sending.

Adds deliver_incremental_sse/2 to emit an initial next event with deferred fields stripped plus pending/hasNext, followed by next events for each deferred payload and a final complete event; it also clears before_send callbacks before send_chunked/200 to avoid issues with plugs like ETag.

Subscription SSE setup is simplified to inline headers + send_chunked/200, and the keep-alive comment payload changes from ": keep-alive" to ":ping".
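For reference, the event sequence described above would look roughly like this on the wire (payload shapes follow the GraphQL incremental-delivery draft; the field names and data here are illustrative, not from this PR's tests):

```text
event: next
data: {"data":{"user":{"name":"Ada"}},"pending":[{"id":"0","path":["user"]}],"hasNext":true}

event: next
data: {"incremental":[{"id":"0","data":{"fields":["..."]}}],"hasNext":false}

event: complete
data: {}
```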