What happened:
During a leaderworkerset rolling update, the leader StatefulSet's .spec.updateStrategy.rollingUpdate.partition is updated according to .spec.updateStrategy.rollingUpdate.maxUnavailable. Leader pods with an ordinal greater than or equal to the partition number use the new revision, while those with an ordinal less than the partition number use the old revision. If a leader pod with an ordinal less than the partition number gets deleted for some reason, the current behavior is that:
All corresponding worker pods will get restarted
The lws controller will recreate the leader pod and the corresponding worker StatefulSet
The leader pod will use the old revision of the leader template, following the partitioned rolling update behavior
Since the old leader template may not be compatible with the new worker template, the recreated group will likely crash and keep restarting after the startupProbe fails.
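To make the resulting revision mismatch easier to see, here is a minimal sketch using kubectl. It assumes an LWS named my-lws whose leader StatefulSet shares that name and whose leader pods are named my-lws-&lt;ordinal&gt;; those names are assumptions for illustration only.

```
# Read the current partition on the leader StatefulSet (hypothetical name my-lws):
kubectl get statefulset my-lws \
  -o jsonpath='{.spec.updateStrategy.rollingUpdate.partition}{"\n"}'

# Show which StatefulSet revision each leader pod is running; leader pods with an
# ordinal below the partition should still report the old controller-revision-hash:
kubectl get pods -L controller-revision-hash | grep -E '^my-lws-[0-9]+ '
```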
There can be multiple reasons why a leader pod with an ordinal less than the partition number can be deleted:
The service needs to do an OS patch of the underlying hosts during the same deployment in which the LWS is updated and needs a rolling update. The host patching updates nodes in random order and can restart a node hosting a leader pod with an ordinal less than the partition number.
The nodes in use may hit a memory/disk pressure condition and get restarted.
What you expected to happen:
When an LWS group is restarted, I would expect the leader and worker pods to use templates from the same revision.
How to reproduce it (as minimally and precisely as possible):
Trigger a rolling update for the LWS
Find a leader pod with an ordinal less than the .spec.updateStrategy.rollingUpdate.partition of the leader StatefulSet and delete that leader pod
The restarted group will have the leader pod using the old leader template while the worker pods use the new worker template
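The same steps as kubectl commands (a sketch, with the same hypothetical my-lws naming as above):

```
# 1. Trigger a rolling update, e.g. by editing the leader/worker pod templates:
kubectl edit leaderworkerset my-lws

# 2. Read the partition from the leader StatefulSet (as shown earlier) and delete
#    a leader pod whose ordinal is below it, e.g. ordinal 0:
kubectl delete pod my-lws-0

# 3. Watch the recreated group: the leader comes back from the old leader template
#    while its worker StatefulSet is recreated from the new worker template, so the
#    group keeps restarting once the startupProbe fails.
kubectl get pods -w | grep '^my-lws-0'
```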
Anything else we need to know?:
Similar to issue #225
Environment:
Kubernetes version (use kubectl version): 1.29
LWS version (use git describe --tags --dirty --always): 0.3.0
Cloud provider or hardware configuration: AWS EKS
OS (e.g: cat /etc/os-release): Amazon Linux
Kernel (e.g. uname -a): Linux ip-21-16-69-244.ec2.internal 5.10.224-212.876.amzn2.x86_64 #1 SMP Thu Aug 22 16:55:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Others:
@xiaohongchen1991 we believe we have fixed this issue and all other issues related to rolling updates via #277.
The controller revision approach avoids depending on template hash to detect updates, and allows us to preserve the old template to re-create failed groups before they get their chance to update.
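For anyone verifying the fix, a rough sketch of what to look at, assuming the controller now records template snapshots in ControllerRevision objects labeled with the owning LWS name (the label key and object names here are assumptions for illustration, not confirmed details of the implementation):

```
# List the revisions kept for the LWS (label key is an assumption):
kubectl get controllerrevisions -l leaderworkerset.sigs.k8s.io/name=my-lws

# Each ControllerRevision stores a serialized template snapshot in .data, which is
# what allows a failed group to be re-created from the revision it was on instead
# of from a recomputed template hash:
kubectl get controllerrevision <revision-name> -o jsonpath='{.data}'
```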