(2.14) Introduce asynchronous file writing for stream snapshots #7492
Conversation
This pull request aims to reduce the (tail) latency of publishing to a stream. Under low load, with a single client publishing one message at a time, latency ranges from ~61 microseconds up to ~20 milliseconds for unlucky messages. Under high load, with a single client publishing 500 messages at a time, latency goes from ~1ms all the way to ~40ms.
One cause of the high tail latency is the periodic snapshotting of the stream. Taking a stream snapshot involves writing a file and syncing it to disk, and doing this safely requires more than one sync call (710407c).
In addition, under high load there was a ~10% improvement in throughput. I used the "lats" benchmark client: https://github.com/synadia-labs/lats
MauriceVanVeen
left a comment
Could you add a couple of tests, for example to raft_test, to cover the use of InstallSnapshotAsync, as well as the case where InstallSnapshot is called while a snapshot is already in progress?
// InstallSnapshot installs a snapshot for the current applied index. This is a
// synchronous call that will block until the snapshot is installed, and will
// release all of the log entries up to the applied index.
func (n *raft) InstallSnapshot(data []byte) error {
Does the performance change because this now creates a new channel and spins up a goroutine for every call? Specifically for the paths that don't use async snapshots, like meta and consumers.
Good question. I did it this way because the spawned goroutine acquires a lock on raft only after it has written the snapshot file. My thinking was that we can get some of the benefits even when calling InstallSnapshot. And now that I think about it, it might work very nicely if combined with some of the optimizations in #7355.
I believe there is not much overhead in spawning a goroutine and making the channel, compared to what the rest of the method is doing (creating the snapshot file + a few syncs + compact). I could measure the overhead by replacing the doSnapshotAsync call with doSnapshot calls, running the same benchmark, and comparing against the baseline.
We could also consider using InstallSnapshotAsync for meta and consumers. I haven't done so because I don't have a way to benchmark those yet.
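The pattern being discussed (a synchronous wrapper around an async snapshot, with completion reported on a channel) can be sketched as follows. All names here are illustrative, not the actual raft implementation in this PR:

```go
package main

import (
	"errors"
	"sync"
)

// snapshotter illustrates the wrapper pattern: the async variant
// spawns a goroutine and reports completion on a channel; the
// synchronous variant simply waits on that channel.
type snapshotter struct {
	mu           sync.Mutex
	snapshotting bool
	write        func(data []byte) error // slow part: file write + syncs
}

var errAlreadySnapshotting = errors.New("snapshot already in progress")

// installSnapshotAsync starts the snapshot and returns a channel that
// receives the result once the file has been written.
func (s *snapshotter) installSnapshotAsync(data []byte) (<-chan error, error) {
	s.mu.Lock()
	if s.snapshotting {
		s.mu.Unlock()
		return nil, errAlreadySnapshotting
	}
	s.snapshotting = true
	s.mu.Unlock()

	done := make(chan error, 1)
	go func() {
		err := s.write(data) // runs outside the lock
		s.mu.Lock()
		s.snapshotting = false
		s.mu.Unlock()
		done <- err
	}()
	return done, nil
}

// installSnapshot is the synchronous wrapper: same code path as the
// async variant, but it blocks until the goroutine finishes.
func (s *snapshotter) installSnapshot(data []byte) error {
	done, err := s.installSnapshotAsync(data)
	if err != nil {
		return err
	}
	return <-done
}
```

With this shape, the synchronous path pays only for the channel allocation and goroutine spawn, which is the overhead being debated in this thread.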
I tried to evaluate the overhead as explained above: in monitorStream, I replaced doSnapshotAsync with doSnapshot. I got the following results:
overhead.pdf
As expected, the overhead is small, and sometimes we even get some latency benefits. Alternatively, we could keep a goroutine ready to go, linked to an IPQueue.
Or, if we want to err on the safe side, I could change the patch so that nothing changes in the case of a regular InstallSnapshot.
Think it's fine for the scope of this PR. But maybe indeed we'll need to do some more profiling (also including the other perf improvements for 2.14) and see whether we should keep a goroutine ready to go, linked to an IPQueue, as you mention.
I was a little skeptical of the throughput improvement, and in fact I think it was partly an artifact of running the experiment on a single machine: three servers saturating my cores, with no network latency between them. So I ran some experiments on EC2: three servers deployed in eu-central-1, each in a different AZ. I repeated the experiments with a batched client and a pipelined client, with and without this optimization. In this setting there is still a sharp reduction in tail latency, but only a slight increase (if any) in throughput.
MauriceVanVeen
left a comment
LGTM
Marked this PR for 2.14, assuming we want to wait and bundle this improvement with the other planned Raft improvements?
// This shouldn't happen for streams like it can for pull
// consumers on idle streams but better to be safe than sorry!
ne, nb := n.Size()
if curState == lastState && ne < compactNumMin && nb < compactSizeMin {
I'm wondering if this does the right thing, given that there are also the firstNeedsUpdate and lastNeedsUpdate bools. If either of those is true, then First and Last may be untrustworthy for this comparison. My feeling is that they shouldn't be set for a filtered state of _EMPTY_, but possible to double-check?
I tried to make sure that the optimization in this patch does not change the logic of when to create a snapshot. I also verified that, under the same workload, this patch results in roughly the same number of snapshot operations as the original code.
Having said that, I do think that checking the store's state is unnecessary, and I did play with changing these conditions a bit. However, I'd prefer to do that in a separate patch.
Just elaborating on my previous comment. The primary reason for taking snapshots of the stream is to trim raft's log. The logic implemented here involves the size and number of entries in the log, plus this curState and lastState comparison. But none of that tells us how much of raft's log we can actually get rid of. A better strategy would be the following: compact only if we can free at least one entire block, and don't bother compacting partial blocks during normal operation.
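The strategy proposed above could be expressed roughly like this. The function and its parameters are hypothetical (and assume a fixed number of entries per block for simplicity), not the actual store API:

```go
package main

// shouldCompact sketches the block-granularity strategy proposed
// above: compact only when the entries below the applied index span
// at least one entire storage block, since compacting a partial
// block frees no space on disk. firstSeq, appliedSeq, and
// entriesPerBlock are illustrative parameters.
func shouldCompact(firstSeq, appliedSeq, entriesPerBlock uint64) bool {
	if appliedSeq <= firstSeq || entriesPerBlock == 0 {
		return false
	}
	// Number of whole blocks fully covered by [firstSeq, appliedSeq).
	return (appliedSeq-firstSeq)/entriesPerBlock >= 1
}
```

Compared to the current entry-count/byte-size thresholds, a check like this ties the decision directly to how much log can actually be reclaimed.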
The monitorStream function can block for a long time when creating and installing a snapshot of a stream's state. This can lead to increased tail latency. This commit extends the RaftNode interface with a new InstallSnapshotAsync method. This method performs snapshot writing and WAL compaction in a separate goroutine, making the process non-blocking. The existing InstallSnapshot method is now a synchronous wrapper around the new asynchronous implementation. Signed-off-by: Daniele Sciascia <[email protected]>