
Make forecast write load accurate when shard numbers change #129990


Open
wants to merge 4 commits into main

Conversation

@nicktindall (Contributor) commented on Jun 25, 2025:

Write load is calculated per shard; it is equal to
WRITE_LOAD(shard) = WRITE_TIME(shard) / UPTIME(shard)
To calculate the forecasted write load for the shards of a new index on data stream rollover, we take the uptime-weighted average write load over all the shards of all the "recent" indices:
FORECAST_WRITE_LOAD(shard) = sum(WRITE_TIME(shard)) / sum(UPTIME(shard)) | all shards in RECENT_INDICES
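As a minimal sketch of that weighted average (not the actual Elasticsearch implementation; the ShardStats record here is invented for illustration):

```java
import java.util.List;

class ForecastSketch {
    // Invented stand-in for per-shard stats; not an Elasticsearch type.
    record ShardStats(long writeTimeNanos, long uptimeNanos) {}

    // Uptime-weighted average write load across the shards of the recent indices:
    // sum(WRITE_TIME) / sum(UPTIME) == sum(WRITE_LOAD * UPTIME) / sum(UPTIME)
    static double forecastWriteLoad(List<ShardStats> recentShards) {
        long totalWriteTime = 0;
        long totalUptime = 0;
        for (ShardStats s : recentShards) {
            totalWriteTime += s.writeTimeNanos();
            totalUptime += s.uptimeNanos();
        }
        return totalUptime == 0 ? 0.0 : (double) totalWriteTime / totalUptime;
    }
}
```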
We use the forecasted write load quite heavily in balancing. When balancing, to calculate the write load for a node, we sum the forecasted write loads of all the shards allocated to that node:
FORECAST_WRITE_LOAD(node) = sum(FORECAST_WRITE_LOAD(shard)) | all shards on node
(see org.elasticsearch.cluster.routing.allocation.allocator.BalancedShardsAllocator.ModelNode#(addShard|removeShard))
And to calculate the write load for an index, we sum the forecasted write loads of all the shards in the index (taking replicas into account):
FORECAST_WRITE_LOAD(index) = sum(FORECAST_WRITE_LOAD(shard)) | all shards in index
(see org.elasticsearch.cluster.routing.allocation.allocator.WeightFunction#getIndexWriteLoad)
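A rough sketch of those two aggregations (the method shapes are invented for illustration; the real logic lives in the BalancedShardsAllocator classes referenced above):

```java
import java.util.Collection;

class AggregationSketch {
    // FORECAST_WRITE_LOAD(node): sum the forecasts of every shard copy on the node.
    static double nodeWriteLoad(Collection<Double> forecastsOfShardsOnNode) {
        return forecastsOfShardsOnNode.stream().mapToDouble(Double::doubleValue).sum();
    }

    // FORECAST_WRITE_LOAD(index): every copy (primary and each replica) of every
    // shard contributes the same per-shard forecast.
    static double indexWriteLoad(double forecastPerShard, int numberOfShards, int numberOfReplicas) {
        return forecastPerShard * numberOfShards * (1 + numberOfReplicas);
    }
}
```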
This means that when auto-sharding decides to increase the number of shards, the total forecasted write load in the cluster increases: for example, an index that goes from 3 shards to 6 shards goes from 3 * FORECAST_WRITE_LOAD(shard) to 6 * FORECAST_WRITE_LOAD(shard). Conversely, when auto-sharding decides to decrease the number of shards, the total forecasted write load in the cluster decreases.

Why is this a problem?

If we auto-shard down from 63 to 38 shards in a rollover, as we did in the scale-testing environment, the balancer will use the forecasted write load that was calculated for 63 shards, but with only 38 shards. The load from 63 shards will be compressed into the 38 shards that replace them, so the balancer will under-estimate the write load for each shard by (1 - (38/63)) ≈ 40%. This will make it think the cluster is in the desired state, while leaving nodes that hold a greater proportion of the scaled-down (and hence under-estimated) index's shards overloaded.
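Spelling out the arithmetic, writing the forecast per shard as L:
total load before rollover = 63 * L
actual per-shard load after rollover = (63 / 38) * L ≈ 1.66 * L
forecast the balancer uses per shard = L, i.e. only 38 / 63 ≈ 60% of the actual load
under-estimate per shard = 1 - (38 / 63) ≈ 40%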

@nicktindall added the >bug and :Distributed Coordination/Allocation labels on Jun 25, 2025
@elasticsearchmachine added the Team:Distributed Coordination label on Jun 25, 2025
@elasticsearchmachine (Collaborator) commented:

Pinging @elastic/es-distributed-coordination (Team:Distributed Coordination)

@@ -362,7 +362,7 @@ public String toString() {
  * <p>If the recommendation is to INCREASE/DECREASE shards the reported cooldown period will be TimeValue.ZERO.
  * If the auto sharding service thinks the number of shards must be changed but it can't recommend a change due to the cooldown
  * period not lapsing, the result will be of type {@link AutoShardingType#COOLDOWN_PREVENTED_INCREASE} or
- * {@link AutoShardingType#COOLDOWN_PREVENTED_INCREASE} with the remaining cooldown configured and the number of shards that should
+ * {@link AutoShardingType#COOLDOWN_PREVENTED_DECREASE} with the remaining cooldown configured and the number of shards that should
@nicktindall (Contributor, Author) commented on the change above:

Fixed a typo too.

@elasticsearchmachine (Collaborator) commented:

Hi @nicktindall, I've created a changelog YAML for you.

Labels: >bug · :Distributed Coordination/Allocation · Team:Distributed Coordination · v9.1.0