For pulsatile, high-volume broadcast workloads, we can get mechanistic gains simply by treating these messages as batches end-to-end. This should be built as opt-in behaviour, and the application must be ready to handle it. That is, we should prefer dedicated APIs over magically attempting to batch under the hood (the tradeoff space is similar to Nagle's algorithm vs. TCP_NODELAY).
The proposal is to implement a set of `*Many` methods and interfaces that applications consume directly for the topics where this behaviour makes sense (like Ethereum attestations).
Forwarding path
- Consume whatever is available in the ingress queue in one call, passing in a caller-supplied buffer to avoid reallocations (`NextMany(ctx context.Context, buffer []*Message) (uint, error)`).
- Validate everything in a batch (if the validator implements a `ValidatorMany([]*Message) ([]bool, error)` interface).
- Forward everything at once, chunking into MTU-sized RPCs (reconciling with `IDONTWANT`s from peers, i.e. dropping the attestations they don't want).
Publishing path
- Introduce similar mechanistic optimizations to the existing `PublishBatch`. In Ethereum, this would be invoked by a beacon node sending many attestations at once (e.g. many validators attesting in the same slot, regardless of committee, since the topic is part of the `Message`).