
Conversation

sciascid (Contributor) commented Oct 30, 2025

The monitorStream function can block for a long time when creating and
installing a snapshot of a stream's state. This can lead to increased
tail latency.
This commit extends the RaftNode interface with a new InstallSnapshotAsync
method, which performs snapshot writing and WAL compaction in a separate
goroutine, making the process non-blocking. The existing InstallSnapshot
method is now a synchronous wrapper around the new asynchronous
implementation.

Signed-off-by: Daniele Sciascia [email protected]

sciascid (Contributor, Author) commented Oct 30, 2025

This pull request aims to reduce the (tail) latency of publishing to a stream.
Below are results of publishing to a single stream on a 3-node cluster deployed on a single machine. The table shows throughput in msg/s and various latency measures (min, 50th percentile, 90th percentile, and so on) in microseconds. In each experiment the client runs for 60 seconds.

Under low load, with a single client publishing one message at a time, latency ranges from 61 microseconds up to ~20 milliseconds for unlucky messages. Under high load, with a single client publishing 500 messages at a time, latency goes from ~1 ms all the way to ~40 ms.

| batch | throughput | min | p50 | p90 | p99 | p99.9 | max |
|-------|------------|-----|------|------|-------|-------|-------|
| 1 | 10131 | 61 | 96 | 109 | 131 | 208 | 20364 |
| 500 | 284923 | 1003 | 1456 | 1969 | 13688 | 21280 | 39995 |

One cause of the high tail latency is the periodic snapshotting of the stream. Taking a stream snapshot involves writing a file and syncing it to disk, and doing so safely requires more than one sync call (710407c).
During this time, the monitorStream goroutine is blocked and can't process incoming messages. This patch "moves" the writing and syncing to disk off the stream's main thread. By doing so, tail latency improves significantly:

| batch | throughput | min | p50 | p90 | p99 | p99.9 | max |
|-------|------------|-----|------|------|------|-------|-------|
| 1 | 10238 | 58 | 95 | 109 | 130 | 213 | 7654 |
| 500 | 313891 | 1073 | 1484 | 1990 | 3434 | 4653 | 13250 |

In addition, under high load, there was a ~10% improvement in throughput.

I used the "lats" benchmark client: https://github.com/synadia-labs/lats
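
To make the shape of the change concrete, here is a minimal, self-contained sketch of the wrapper pattern described in the commit message. This is a toy stand-in, not the actual patch: the name writeAndCompact and the channel-based signature of InstallSnapshotAsync are assumptions for illustration.

```go
package main

import "fmt"

// raftNode is a toy stand-in; the real patch extends the RaftNode
// interface in nats-server, whose internals are not reproduced here.
type raftNode struct{}

// writeAndCompact stands in for the expensive work the PR moves off
// monitorStream: write the snapshot file, fsync it, then compact the WAL.
func (n *raftNode) writeAndCompact(data []byte) error {
	fmt.Printf("snapshotting %d bytes\n", len(data))
	return nil
}

// InstallSnapshotAsync starts the work in its own goroutine and returns a
// channel carrying the eventual result (assumed shape, not the real API).
func (n *raftNode) InstallSnapshotAsync(data []byte) <-chan error {
	done := make(chan error, 1)
	go func() { done <- n.writeAndCompact(data) }()
	return done
}

// InstallSnapshot stays synchronous by simply waiting on the async result.
func (n *raftNode) InstallSnapshot(data []byte) error {
	return <-n.InstallSnapshotAsync(data)
}

func main() {
	var n raftNode
	if err := n.InstallSnapshot([]byte("state")); err != nil {
		fmt.Println("snapshot failed:", err)
	}
}
```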

sciascid force-pushed the raft-two-step-snapshot branch 7 times, most recently from f55eda9 to a7e89d1 on November 3, 2025 14:50
sciascid marked this pull request as ready for review November 3, 2025 14:52
sciascid requested a review from a team as a code owner November 3, 2025 14:52
MauriceVanVeen (Member) left a comment

Could you add a couple tests, for example to raft_test, to cover the use of InstallSnapshotAsync as well as when InstallSnapshot is called when already snapshotting?
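
Such a test might look roughly like the following sketch, where a stub stands in for the real raft node. The "already snapshotting" guard and all names here are assumptions for illustration, not the actual implementation in raft_test.

```go
package snapshot

import (
	"errors"
	"sync/atomic"
	"testing"
)

var errAlreadySnapshotting = errors.New("already snapshotting")

// stubNode mimics just enough behavior for the sketch: an async install
// guarded by a "snapshotting" flag. Assumed behavior, not the real raft code.
type stubNode struct {
	snapshotting atomic.Bool
	release      chan struct{} // blocks the in-flight snapshot until closed
}

func (n *stubNode) InstallSnapshotAsync(data []byte) <-chan error {
	done := make(chan error, 1)
	if !n.snapshotting.CompareAndSwap(false, true) {
		done <- errAlreadySnapshotting
		return done
	}
	go func() {
		<-n.release // simulate a slow write + fsync + compact
		n.snapshotting.Store(false)
		done <- nil
	}()
	return done
}

func TestInstallSnapshotWhileSnapshotting(t *testing.T) {
	n := &stubNode{release: make(chan struct{})}
	first := n.InstallSnapshotAsync([]byte("a"))
	// A second call while the first is still in flight should fail fast.
	if err := <-n.InstallSnapshotAsync([]byte("b")); !errors.Is(err, errAlreadySnapshotting) {
		t.Fatalf("expected errAlreadySnapshotting, got %v", err)
	}
	close(n.release) // let the first snapshot finish
	if err := <-first; err != nil {
		t.Fatalf("first snapshot failed: %v", err)
	}
}
```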

// InstallSnapshot installs a snapshot for the current applied index. This is a
// synchronous call that will block until the snapshot is installed, and will
// release all of the log entries up to the applied index.
func (n *raft) InstallSnapshot(data []byte) error {
Member:

Does the performance change due to this now creating both a new ch for every call and spinning up a goroutine? Specifically for the paths that don't use async snapshots like meta and consumer.

sciascid (author):

Good question. I did it this way because the goroutine that is spawned acquires a lock on raft only after it has written the snapshot file. My thinking was that we can get some of the benefits also when calling InstallSnapshot. And now that I think about it, it might work very nicely combined with some of the optimizations in #7355.
I believe there is not too much overhead in spawning a goroutine and making the channel, compared to what the rest of the method does (create the snapshot file + a few syncs + compact). I could measure the overhead by replacing the doSnapshotAsync call with doSnapshot calls, running the same benchmark, and comparing it to the baseline.
We could also consider using InstallSnapshotAsync for meta and consumers. I haven't done so because I don't have a way to benchmark those yet.
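
The ordering described here — slow file I/O without the raft lock, then a short locked compaction — might look roughly like this sketch (writeSnapshotFile and compactWAL are placeholder names, not the real functions):

```go
package main

import (
	"fmt"
	"sync"
)

type node struct{ mu sync.Mutex }

// writeSnapshotFile is a placeholder for the slow part: writing the
// snapshot file and fsyncing it. No raft lock is held while it runs.
func (n *node) writeSnapshotFile(data []byte) error {
	fmt.Printf("writing %d bytes (no lock held)\n", len(data))
	return nil
}

// compactWAL is a placeholder for the final step, done under the lock.
func (n *node) compactWAL() error {
	fmt.Println("compacting WAL (lock held)")
	return nil
}

func (n *node) installAsync(data []byte) <-chan error {
	done := make(chan error, 1)
	go func() {
		// 1. Slow file I/O first, lock-free: other raft work proceeds.
		if err := n.writeSnapshotFile(data); err != nil {
			done <- err
			return
		}
		// 2. Only now take the lock, for the short compaction step.
		n.mu.Lock()
		err := n.compactWAL()
		n.mu.Unlock()
		done <- err
	}()
	return done
}

func main() {
	n := &node{}
	if err := <-n.installAsync([]byte("state")); err != nil {
		fmt.Println("snapshot failed:", err)
	}
}
```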

sciascid (author):

I tried to evaluate the overhead as explained above. Basically, in monitorStream I replaced doSnapshotAsync with doSnapshot. I got the following results:
overhead.pdf

As expected, the overhead is small, and sometimes we even get some benefits in terms of latency. Alternatively, we could keep a goroutine ready to go, linked to an IPQueue.
Or, if we want to err on the safe side, I could change the patch so that nothing changes in the case of a regular InstallSnapshot.
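
A "goroutine ready to go" might look something like the following, where a plain channel stands in for nats-server's IPQueue (all names here are assumptions for illustration, and the real IPQueue API is not reproduced):

```go
package main

import (
	"fmt"
	"sync"
)

// snapshotReq is a hypothetical unit of work handed to the worker.
type snapshotReq struct {
	data []byte
	done chan error
}

// startSnapshotWorker drains a queue from a single long-lived goroutine,
// instead of spawning a new goroutine per snapshot.
func startSnapshotWorker(wg *sync.WaitGroup, queue <-chan snapshotReq) {
	defer wg.Done()
	for req := range queue {
		// Expensive part: write + fsync + compact (stubbed out here).
		fmt.Printf("snapshotting %d bytes\n", len(req.data))
		req.done <- nil
	}
}

func main() {
	queue := make(chan snapshotReq, 16)
	var wg sync.WaitGroup
	wg.Add(1)
	go startSnapshotWorker(&wg, queue)

	done := make(chan error, 1)
	queue <- snapshotReq{data: []byte("state"), done: done}
	if err := <-done; err != nil {
		fmt.Println("snapshot failed:", err)
	}
	close(queue)
	wg.Wait()
}
```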

Member:

Think it's fine for the scope of this PR. But maybe indeed we'll need to do some more profiling (also including the other perf improvements for 2.14) and see whether we should keep a goroutine ready to go, linked to an IPQueue like you mention.

sciascid (Contributor, Author) commented Nov 4, 2025

I was a little skeptical of the throughput improvement, and in fact I think it was partially due to the experiment running on a single machine: three servers saturate my cores, and there is no network latency between them. So I ran some experiments on EC2: three servers deployed in eu-central-1, each in a different AZ. I repeated the experiments with a batched client and a pipelined client, with and without this optimization. In this setting there is still a sharp reduction in tail latency, but only a slight increase (if any) in throughput.

batch.pdf
pipeline.pdf

sciascid force-pushed the raft-two-step-snapshot branch 3 times, most recently from 2c3a58e to 16a5caa on November 5, 2025 15:06
MauriceVanVeen (Member) left a comment

LGTM

Marked this PR for 2.14, assuming we want to wait and bundle this improvement with the other planned Raft improvements?

MauriceVanVeen changed the title from "Introduce asynchronous file writing for stream snapshots" to "(2.14) Introduce asynchronous file writing for stream snapshots" on Nov 5, 2025
sciascid force-pushed the raft-two-step-snapshot branch 2 times, most recently from 0562a89 to c6657a4 on November 5, 2025 15:21
// This shouldn't happen for streams like it can for pull
// consumers on idle streams but better to be safe than sorry!
ne, nb := n.Size()
if curState == lastState && ne < compactNumMin && nb < compactSizeMin {
neilalexander (Member) commented Nov 13, 2025

I'm wondering if this does the right thing, given that there are also the firstNeedsUpdate and lastNeedsUpdate bools. If either of those is true, then First and Last may be untrustworthy for this comparison. My feeling is that they shouldn't be set for a filtered state of _EMPTY_, but could you double-check?

sciascid (author):

I tried to make sure that the optimization in this patch makes no changes to the logic of when to create a snapshot. I also verified that, under the same workload, this patch results in roughly the same number of snapshot operations as the original code.
Having said that, I do think that checking the store's state is unnecessary, and I did play with changing these conditions a bit. However, I'd prefer to do that in a separate patch.

sciascid (author):

Just elaborating on my previous comment. The primary reason for taking snapshots of the stream is to trim raft's log. The logic implemented here involves the size and number of entries in the log, plus this curState and lastState comparison, but that doesn't tell us anything about how much of raft's log we can actually get rid of. A better strategy would be the following: compact only if we can get rid of at least one entire block, and don't bother compacting partial blocks during normal operation.
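
In code, the proposed condition might look roughly like this (shouldCompact, applied, and firstBlockEnd are hypothetical names used for illustration, not the real code):

```go
package main

import "fmt"

// shouldCompact sketches the proposed rule: compact only once the applied
// index has passed the end of at least one entire WAL block, so that a
// whole block can be dropped. Both parameter names are hypothetical.
func shouldCompact(applied, firstBlockEnd uint64) bool {
	return applied >= firstBlockEnd
}

func main() {
	fmt.Println(shouldCompact(2048, 1024)) // true: first block fully applied, drop it
	fmt.Println(shouldCompact(512, 1024))  // false: would only trim part of a block
}
```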

sciascid force-pushed the raft-two-step-snapshot branch from c6657a4 to 455a925 on November 14, 2025 10:35
The monitorStream function can block for a long time when creating and
installing a snapshot of a stream's state. This can lead to increased
tail latency.
This commit extends the RaftNode interface with a new InstallSnapshotAsync
method, which performs snapshot writing and WAL compaction in a separate
goroutine, making the process non-blocking. The existing InstallSnapshot
method is now a synchronous wrapper around the new asynchronous
implementation.

Signed-off-by: Daniele Sciascia <[email protected]>
sciascid force-pushed the raft-two-step-snapshot branch from 455a925 to 4d07c51 on November 14, 2025 10:36