
Leader and Worker pods use inconsistent revisions during rolling update #280

Closed
xiaohongchen1991 opened this issue Dec 12, 2024 · 4 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


xiaohongchen1991 commented Dec 12, 2024

What happened:
During a LeaderWorkerSet rolling update, the leader StatefulSet's .spec.updateStrategy.rollingUpdate.partition is updated according to .spec.updateStrategy.rollingUpdate.maxUnavailable. Leader pods with an ordinal greater than or equal to the partition use the new revision, while those with an ordinal less than the partition keep the old revision. If a leader pod with an ordinal less than the partition is deleted for some reason, the current behavior is that:

  1. All corresponding worker pods will get restarted
  2. The lws controller will recreate the leader pod and the corresponding worker statefulset
  3. The leader pod will use the old revision of the leader template following the partitioned rolling update behavior
  4. The worker pods will use the new revision of the worker template since the lws definition is already updated, https://github.com/kubernetes-sigs/lws/blob/main/pkg/controllers/pod_controller.go#L263.

Since the old leader template may not be compatible with the new worker template, the recreated group will likely crash and keep restarting after the startupProbe fails. A minimal way to observe the mismatch is sketched below.
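
For illustration only, assuming a hypothetical LeaderWorkerSet named my-lws (leader pods are typically named <lws-name>-<group-index> and worker pods <lws-name>-<group-index>-<worker-index>; adjust names to your setup):

```sh
# Partition currently set on the leader StatefulSet by the rolling update
kubectl get statefulset my-lws \
  -o jsonpath='{.spec.updateStrategy.rollingUpdate.partition}{"\n"}'

# Compare what a recreated leader pod (group 0) and one of its worker pods are
# actually running: the leader still uses the old template, the worker the new one
kubectl get pod my-lws-0 -o jsonpath='{.spec.containers[*].image}{"\n"}'
kubectl get pod my-lws-0-1 -o jsonpath='{.spec.containers[*].image}{"\n"}'
```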

There are multiple reasons why a leader pod with an ordinal less than the partition number might be deleted:

  1. The service needs to apply OS patches to the underlying hosts during the same deployment in which the LWS is updated and requires a rolling update. Host patching updates nodes in arbitrary order and can restart a node hosting a leader pod with an ordinal less than the partition number.
  2. A node in use may hit a memory or disk pressure condition and be restarted.

What you expected to happen:
When an LWS group is restarted, I would expect the leader and worker pods to use templates from the same revision.

How to reproduce it (as minimally and precisely as possible):

  1. Trigger a rolling update of the LWS
  2. Find a leader pod with an ordinal less than .spec.updateStrategy.rollingUpdate.partition on the leader StatefulSet and delete that leader pod
  3. The restarted group will have the leader pod using the old leader template while the worker pods use the new worker template, as sketched below
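
A rough reproduction sketch under the same assumptions (LWS named my-lws; names and the image change are hypothetical):

```sh
# 1. Trigger a rolling update, e.g. by changing the pod template image in the LWS spec
kubectl edit leaderworkerset my-lws

# 2. Read the partition from the leader StatefulSet
kubectl get statefulset my-lws \
  -o jsonpath='{.spec.updateStrategy.rollingUpdate.partition}{"\n"}'

# 3. Delete a leader pod whose ordinal is less than the partition (group 0 here)
kubectl delete pod my-lws-0

# 4. After the group is recreated, the leader pod runs the old template while its
#    worker pods run the new one (compare images as in the earlier snippet)
kubectl get pods -o wide | grep my-lws-0
```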

Anything else we need to know?:
Similar to issue #225.

Environment:

  • Kubernetes version (use kubectl version): 1.29
  • LWS version (use git describe --tags --dirty --always): 0.3.0
  • Cloud provider or hardware configuration: AWS EKS
  • OS (e.g: cat /etc/os-release): Amazon Linux
  • Kernel (e.g. uname -a): Linux ip-21-16-69-244.ec2.internal 5.10.224-212.876.amzn2.x86_64 #1 SMP Thu Aug 22 16:55:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:
xiaohongchen1991 added the kind/bug label Dec 12, 2024
@Edwinhr716 (Contributor)

Isn't this issue a duplicate of #238?

@xiaohongchen1991 (Author)

You are right, this is the same issue as #238. I didn't notice one had already been created there.

@xiaohongchen1991 (Author)

Duplicate of #238

xiaohongchen1991 closed this as not planned (duplicate) Dec 13, 2024
ahg-g (Contributor) commented Jan 4, 2025

@xiaohongchen1991 we believe we have fixed this issue and all other issues related to rolling updates via #277.

The controller revision approach avoids depending on template hash to detect updates, and allows us to preserve the old template to re-create failed groups before they get their chance to update.

We aim to release 0.5 next week, but if you would like, you can test it now using the dev version (see the instructions at https://github.com/kubernetes-sigs/lws/blob/main/docs/setup/install.md#install-the-latest-development-version). Looking forward to your feedback.
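
If it helps when testing, one way to inspect the revisions the controller now records is via the ControllerRevision objects it creates (ControllerRevision is a standard apps/v1 resource; the label selector below is an assumption and may differ between versions):

```sh
# List the ControllerRevisions recorded for the LWS; the label selector is assumed
kubectl get controllerrevisions -l leaderworkerset.sigs.k8s.io/name=my-lws
```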
