Added node-deletion-delay-timeout and node-deletion-batcher-interval to FAQ.md and as chart example

Signed-off-by: Pierluigi Lenoci <[email protected]>
pierluigilenoci committed Nov 6, 2024
1 parent 9ecf9b1 commit 82366b9
Showing 3 changed files with 5 additions and 1 deletion.
2 changes: 1 addition & 1 deletion charts/cluster-autoscaler/Chart.yaml
@@ -11,4 +11,4 @@ name: cluster-autoscaler
sources:
- https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler
type: application
-version: 9.43.2
+version: 9.43.3
2 changes: 2 additions & 0 deletions charts/cluster-autoscaler/values.yaml
@@ -192,6 +192,8 @@ extraArgs:
# scale-down-delay-after-delete: 0s
# scale-down-delay-after-failure: 3m
# scale-down-unneeded-time: 10m
+# node-deletion-delay-timeout: 2m
+# node-deletion-batcher-interval: 0s
# skip-nodes-with-system-pods: true
# balancing-ignore-label_1: first-label-to-ignore
# balancing-ignore-label_2: second-label-to-ignore
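The new keys above are commented out, so by default the chart does not pass them to the autoscaler. A minimal sketch of an override that enables them (the file name `my-values.yaml` and the values shown are illustrative, not recommendations):

```yaml
# my-values.yaml -- illustrative override, not part of this commit
extraArgs:
  # Wait up to two minutes for delay-deletion annotations to be removed
  # before a scaled-down node is actually deleted.
  node-deletion-delay-timeout: 2m
  # 0s means nodes are deleted as soon as they are ready, without batching.
  node-deletion-batcher-interval: 0s
```

The chart renders each `extraArgs` entry as a `--<key>=<value>` flag on the cluster-autoscaler container, so an override like this could be applied with, for example, `helm upgrade --install cluster-autoscaler autoscaler/cluster-autoscaler -f my-values.yaml` (the `autoscaler` repository alias is assumed here).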
2 changes: 2 additions & 0 deletions cluster-autoscaler/FAQ.md
@@ -944,6 +944,8 @@ The following startup parameters are supported for cluster autoscaler:
| `scale-down-non-empty-candidates-count` | Maximum number of non empty nodes considered in one iteration as candidates for scale down with drain<br>Lower value means better CA responsiveness but possible slower scale down latency<br>Higher value can affect CA performance with big clusters (hundreds of nodes)<br>Set to non positive value to turn this heuristic off - CA will not limit the number of nodes it considers. | 30
| `scale-down-candidates-pool-ratio` | A ratio of nodes that are considered as additional non empty candidates for<br>scale down when some candidates from previous iteration are no longer valid<br>Lower value means better CA responsiveness but possible slower scale down latency<br>Higher value can affect CA performance with big clusters (hundreds of nodes)<br>Set to 1.0 to turn this heuristics off - CA will take all nodes as additional candidates. | 0.1
| `scale-down-candidates-pool-min-count` | Minimum number of nodes that are considered as additional non empty candidates<br>for scale down when some candidates from previous iteration are no longer valid.<br>When calculating the pool size for additional candidates we take<br>`max(#nodes * scale-down-candidates-pool-ratio, scale-down-candidates-pool-min-count)` | 50
+| `node-deletion-delay-timeout` | Maximum time CA waits for the delay-deletion.cluster-autoscaler.kubernetes.io/ annotations to be removed from a node before deleting it. | 2 minutes
+| `node-deletion-batcher-interval` | How long CA ScaleDown gathers nodes before deleting them in a batch. | 0 seconds
| `scan-interval` | How often cluster is reevaluated for scale up or down | 10 seconds
| `max-nodes-total` | Maximum number of nodes in all node groups. Cluster autoscaler will not grow the cluster beyond this number. | 0
| `cores-total` | Minimum and maximum number of cores in cluster, in the format \<min>:\<max>. Cluster autoscaler will not scale the cluster beyond these numbers. | 320000
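Outside the chart, the same parameters are ordinary command-line flags on the cluster-autoscaler binary. A hedged sketch of how they might look in a Deployment's container spec (the image tag and the rest of the command are placeholders):

```yaml
# Fragment of a cluster-autoscaler Deployment -- illustrative only
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.31.0  # placeholder tag
    command:
      - ./cluster-autoscaler
      # Give controllers up to 2m to remove
      # delay-deletion.cluster-autoscaler.kubernetes.io/ annotations.
      - --node-deletion-delay-timeout=2m
      # 0s disables batching: each node is deleted individually as soon as it is ready.
      - --node-deletion-batcher-interval=0s
```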
