Unary gRPC support (core + http4s integration)#1896

Draft
denisrosca wants to merge 2 commits into disneystreaming:series/0.18 from denisrosca:grpc-new
Conversation

@denisrosca
Contributor

Purpose

  • Introduce unary gRPC support for smithy4s services using Alloy’s gRPC protocol modeling.
  • Provide a reusable, transport-agnostic gRPC protocol layer plus an http4s transport integration that follows gRPC semantics around framing, metadata, trailers, and structured errors.

What’s implemented (smithy4s + gRPC spec)

  • A transport-agnostic unary gRPC “protocol core”:
    • Unary framing/deframing with message size limits
    • Canonical handling of gRPC metadata headers/trailers, including binary metadata base64 rules
    • Structured error propagation that can carry machine-readable details alongside grpc-status/grpc-message, and client-side decoding back into modeled errors when possible (with configurable strictness)
  • A transport-specific http4s integration layer:
    • Server routing and request validation for unary gRPC over HTTP (method/content-type gating, optional HTTP/2 requirement, canonical method-path parsing incl. trailing slashes)
    • Client request construction and response decoding consistent with gRPC trailer semantics, including “trailers-only” responses as exposed by http4s/ember
    • Separation between gRPC protocol mechanics and http4s plumbing, so other transports can reuse the same core logic
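For reference, the unary framing mentioned above follows the gRPC length-prefixed message format from the gRPC-over-HTTP/2 spec: a 1-byte compressed flag followed by a 4-byte big-endian payload length. A minimal sketch (in Python for illustration; the PR itself is Scala, and the default size limit here is an assumption, not the PR's configured value):

```python
import struct

MAX_MESSAGE_SIZE = 4 * 1024 * 1024  # assumed default inbound limit, not from the PR

def frame(payload: bytes, compressed: bool = False) -> bytes:
    # Length-Prefixed-Message: 1-byte compressed flag + 4-byte big-endian length
    return struct.pack(">BI", 1 if compressed else 0, len(payload)) + payload

def deframe(data: bytes, max_size: int = MAX_MESSAGE_SIZE) -> bytes:
    if len(data) < 5:
        raise ValueError("incomplete gRPC frame header")
    compressed, length = struct.unpack(">BI", data[:5])
    if length > max_size:
        # per spec, oversized messages map to RESOURCE_EXHAUSTED (status 8)
        raise ValueError("RESOURCE_EXHAUSTED: message exceeds max size")
    if len(data) < 5 + length:
        raise ValueError("incomplete gRPC frame body")
    if compressed:
        # compression is a TODO in this PR, so the sketch rejects it
        raise ValueError("compressed flag set but no compression negotiated")
    return data[5 : 5 + length]
```

A unary call is one such frame in the request body and one in the response body, with status carried in trailers.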
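The binary-metadata rule comes from the gRPC spec: metadata keys ending in `-bin` carry base64-encoded binary values (emitted without padding, with padding tolerated on receive), while ASCII metadata is sent verbatim. A hypothetical sketch, again in Python for illustration:

```python
import base64

def encode_metadata_value(key: str, value: bytes) -> str:
    # Keys ending in "-bin" hold binary values, base64-encoded;
    # the spec says to emit unpadded base64 and accept both forms.
    if key.endswith("-bin"):
        return base64.b64encode(value).decode("ascii").rstrip("=")
    return value.decode("ascii")  # ASCII metadata goes on the wire verbatim

def decode_metadata_value(key: str, wire: str) -> bytes:
    if key.endswith("-bin"):
        padded = wire + "=" * (-len(wire) % 4)  # tolerate missing padding
        return base64.b64decode(padded)
    return wire.encode("ascii")
```

This is the rule that applies to, for example, the `grpc-status-details-bin` trailer carrying the serialized error-details envelope.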
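Canonical method-path parsing follows the `/{service}/{method}` shape from the gRPC spec; the trailing-slash tolerance below mirrors what the PR description mentions, but the exact behavior chosen in the PR is an assumption here:

```python
def parse_method_path(path: str) -> tuple[str, str]:
    # Canonical gRPC path: "/" full-service-name "/" method-name.
    # Tolerating a trailing slash is an assumed reading of the PR's
    # "canonical method-path parsing incl. trailing slashes".
    if path.endswith("/") and len(path) > 1:
        path = path[:-1]
    parts = path.split("/")
    if len(parts) != 3 or parts[0] != "" or not parts[1] or not parts[2]:
        raise ValueError(f"not a valid gRPC method path: {path!r}")
    return parts[1], parts[2]
```

Requests that fail this parse (or the method/content-type gating) never reach the protocol core.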

Dependencies / open questions

  • This PR depends on yet-unreleased Alloy updates in alloy#271 ("Add grpc-status-details-bin carrier shapes and grpcStatus, grpcErrorMessage traits"), which provides the gRPC-related modeling traits used to drive status codes and error-message selection.
  • Carrier structures for rich error details (the protobuf envelope used in grpc-status-details-bin) are currently implemented in smithy4s. Where these carriers should be defined is still up for debate, and they may move between the Alloy and smithy4s changes.

TODO (gRPC spec compliance + grpc-java/grpc-go parity)

  • Implement streaming (client/server/bidi), including backpressure and flow control correctness
  • Implement real compression (gzip at least): negotiation and per-message compression bit support
  • Implement deadline + cancellation enforcement end-to-end (grpc-timeout, cancellations, status mapping)
  • Harden HTTP/2 mapping requirements and protocol error mapping (pseudo-headers, non-200 handling, missing trailers, intermediaries)
  • Expand rich error details handling (grpc-status-details-bin) beyond “first detail”, tighten strictness/precedence rules
  • Implement retry and hedging policies (service config parity)
  • Implement name resolution + baseline load-balancing (pick_first, round_robin) with connectivity state model
  • Implement connection lifecycle features: GOAWAY/drain behavior and keepalive/ping policies
  • Add configurable limits: max inbound/outbound message size and metadata size, with correct RESOURCE_EXHAUSTED behavior
  • Add interceptor/context propagation parity suitable for Scala FP (deadline/cancel/metadata/locals)
  • Add standard ecosystem services: reflection, health checking, channelz-style diagnostics
  • Add observability parity: structured metrics (attempts/retries), tracing propagation and spans
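On the deadline TODO: per the gRPC spec, deadlines propagate via the `grpc-timeout` request header, encoded as up to 8 ASCII digits followed by a unit (`H`, `M`, `S`, `m`, `u`, `n`). A rough encode/decode sketch (the finest-unit-that-fits selection policy is an assumption, not mandated by the spec):

```python
# Unit suffixes for the grpc-timeout header, in seconds.
_UNITS = {"H": 3600.0, "M": 60.0, "S": 1.0, "m": 1e-3, "u": 1e-6, "n": 1e-9}

def encode_timeout(seconds: float) -> str:
    # Pick the finest unit whose value still fits in 8 digits,
    # preserving as much precision as possible (assumed policy).
    for unit in ("n", "u", "m", "S", "M", "H"):
        value = round(seconds / _UNITS[unit])
        if value < 100_000_000:
            return f"{value}{unit}"
    raise ValueError("timeout too large to encode as grpc-timeout")

def decode_timeout(header: str) -> float:
    value, unit = header[:-1], header[-1]
    if unit not in _UNITS or not value.isdigit() or len(value) > 8:
        raise ValueError(f"malformed grpc-timeout: {header!r}")
    return int(value) * _UNITS[unit]
```

Enforcement end-to-end additionally requires cancelling the server effect when the deadline lapses and surfacing DEADLINE_EXCEEDED (status 4) to the client, which is out of scope for this sketch.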

PR Checklist (not all items are relevant to all PRs)

  • Added unit-tests (for runtime code)
  • Added bootstrapped code + smoke tests (when the rendering logic is modified)
  • Added build-plugins integration tests (when reflection loading is required at codegen-time)
  • Added alloy compliance tests (when simpleRestJson protocol behaviour is expanded/updated)
  • Updated dynamic module to match generated-code behaviour
  • Added documentation
  • Updated changelog
