diff --git a/keps/prod-readiness/sig-node/5307.yaml b/keps/prod-readiness/sig-node/5307.yaml
new file mode 100644
index 00000000000..5b04bcc7da9
--- /dev/null
+++ b/keps/prod-readiness/sig-node/5307.yaml
@@ -0,0 +1,3 @@
+kep-number: 5307
+alpha:
+  approver: "johnbelamaric"
diff --git a/keps/sig-node/5307-container-restart-policy/README.md b/keps/sig-node/5307-container-restart-policy/README.md
new file mode 100644
index 00000000000..193e09869a6
--- /dev/null
+++ b/keps/sig-node/5307-container-restart-policy/README.md
@@ -0,0 +1,1017 @@
+# KEP-5307: Container Restart Policy
+
+- [Release Signoff Checklist](#release-signoff-checklist)
+- [Summary](#summary)
+- [Motivation](#motivation)
+  - [Goals](#goals)
+  - [Non-Goals](#non-goals)
+- [Proposal](#proposal)
+  - [User Stories (Optional)](#user-stories-optional)
+    - [Story 1](#story-1)
+  - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
+  - [Risks and Mitigations](#risks-and-mitigations)
+    - [Controllers managing pod failures](#controllers-managing-pod-failures)
+    - [Unintended Restart Loops](#unintended-restart-loops)
+- [Design Details](#design-details)
+  - [Test Plan](#test-plan)
+    - [Prerequisite testing updates](#prerequisite-testing-updates)
+    - [Unit tests](#unit-tests)
+    - [Integration tests](#integration-tests)
+    - [e2e tests](#e2e-tests)
+  - [Graduation Criteria](#graduation-criteria)
+    - [Alpha](#alpha)
+    - [Beta](#beta)
+    - [GA](#ga)
+  - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
+  - [Version Skew Strategy](#version-skew-strategy)
+- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
+  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
+  - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
+  - [Monitoring Requirements](#monitoring-requirements)
+  - [Dependencies](#dependencies)
+  - [Scalability](#scalability)
+  - [Troubleshooting](#troubleshooting)
+- [Implementation History](#implementation-history)
+- [Drawbacks](#drawbacks)
+- [Alternatives](#alternatives)
+  - [Wrapping entrypoint](#wrapping-entrypoint)
+  - [Non-declarative (callbacks based) restart policy](#non-declarative-callbacks-based-restart-policy)
+- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
+
+## Release Signoff Checklist
+
+Items marked with (R) are required *prior to targeting to a milestone / release*.
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
+  - [ ] e2e Tests for all Beta API Operations (endpoints)
+  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [ ] (R) Graduation criteria is in place
+  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [ ] (R) Production readiness review completed
+- [ ] (R) Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+[kubernetes.io]: https://kubernetes.io/
+[kubernetes/enhancements]: https://git.k8s.io/enhancements
+[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
+[kubernetes/website]: https://git.k8s.io/website
+
+## Summary
+
+This KEP introduces container-level restart rules that the kubelet applies when
+a container fails. It extends the existing Never, OnFailure, and Always restart
+policies. It allows users to configure special exit codes of a container,
+representing the restart of a task, to be treated as non-failures even if the
+Pod has restartPolicy=Never. This is important for use cases where rescheduling
+a task is very expensive and restarting in place is preferable.
+
+## Motivation
+
+With the proliferation of AI/ML training jobs, where a single job may span
+hundreds of Pods that each use expensive hardware and are expensive to schedule,
+in-place restarts are becoming more and more important.
+
+Consider a Pod that is part of a large training job. Progress on each training
+"step" is made only when all Pods have completed the calculation for that step.
+Each Pod starts from a checkpoint, they all make progress together, and then
+they write a new checkpoint. If any Pod fails, the fastest way to resume the
+calculation is to restart all Pods so that they all start again from the
+previous checkpoint. This kind of restart therefore needs special handling.
+
+There are a few reasons why the OnFailure restart policy will not work:
+
+1) Hardware failures must still result in a Pod failure and rescheduling.
+The two kinds of failures - one caused by a hardware issue and one caused by
+an in-place restart request - need to be distinguished.
+
+2) Pods are often part of JobSets with a Job pod failure policy configured
+(see https://kubernetes.io/docs/tasks/job/pod-failure-policy/). The pod
+failure policy is a server-side policy and is not compatible with Pods
+that use restartPolicy=OnFailure.
+
+### Goals
+
+- Allow a container with restartPolicy=Custom to keep restarting on specific
+exit codes, even when the Pod's restartPolicy is Never.
+- Allow extensibility of the API to support more scenarios in the future.
+
+### Non-Goals
+
+- Implement maxRestartTimes (https://github.com/kubernetes/enhancements/issues/3322).
+- Support all possible restart policy rules in this KEP; some may be ideas for the future.
+
+## Proposal
+
+### User Stories (Optional)
+
+#### Story 1
+
+A Pod has two containers - a main container and a sidecar container. The Pod is
+one of the Pods in a large AI/ML training workload. Since training jobs are
+similar in nature, the next job may be processed by the same Pods. The sidecar
+container coordinates the jobs: once a new job needs to be started, the sidecar
+kills the main container and expects it to be restarted quickly so that it can
+pick up the new task to execute.
+
+Since these workloads are often declared as a Job with a [pod failure policy](https://kubernetes.io/docs/tasks/job/pod-failure-policy/),
+the exit that results from the sidecar killing the container should be ignored
+by the kubelet and the container should be restarted quickly.
+
+See https://github.com/kubernetes-sigs/jobset/issues/876 for the detailed description.
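+
+For illustration, a minimal sketch of what such a Pod could look like using the
+API proposed in the Design Details section below. The container names, images,
+and the agreed-upon "restart me" exit code (42) are hypothetical; the sidecar is
+assumed to ask the main process to exit with that code when a new job should be
+picked up:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: trainer-pod
+spec:
+  restartPolicy: Never           # real failures still fail the Pod
+  initContainers:
+  - name: coordinator            # native sidecar that tells the trainer to restart
+    image: registry.example.com/coordinator:latest
+    restartPolicy: Always
+  containers:
+  - name: trainer                # main training container
+    image: registry.example.com/trainer:latest
+    restartPolicy: Custom
+    restartRules:
+    - action: Ignore             # exit code 42 means "restart me in place"
+      onExitCodes:
+        operator: In
+        values: [42]
+```
+
+With such a configuration, a trainer exit with any other code still fails the
+Pod, so hardware failures keep resulting in Pod failure and rescheduling.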
+
+### Notes/Constraints/Caveats (Optional)
+
+The container-level restart policy may restart containers in ways that break
+existing assumptions about the pod-level restart policy. A Pod with
+`restartPolicy=Never` may have containers that were restarted and have a
+restart count higher than 0. This may also affect how the Job
+`podFailurePolicy` interacts with pod failures.
+
+### Risks and Mitigations
+
+#### Controllers managing pod failures
+
+Controllers that manage workloads based on Pod failures (like Job with
+podFailurePolicy) might misinterpret the Pod's state. A container restarting
+due to an "ignored" exit code might not increment the Pod's restart count
+in a way that these controllers expect for failure accounting.
+
+In this case, the Job controller with podFailurePolicy only executes the policy
+after the Pod is no longer in the Running phase. A container restart will not
+change the Pod's phase. Therefore this works fine with the Job's podFailurePolicy
+without any changes to the Job controller or the Job API.
+
+#### Unintended Restart Loops
+
+A container might persistently exit with an "ignored" exit code due to
+an unresolvable underlying issue, leading to frequent restarts that consume
+node resources and potentially mask the problem.
+
+Container restarts will still follow the exponential backoff to avoid
+excessive resource consumption due to restarts.
+
+## Design Details
+
+The proposal is to implement a simple API, with a very limited set of allowed
+values, on [Container](https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go#L2528)
+under k8s.io/apis/core. The shape of the API is informed by future improvements
+we may need to implement, as described here:
+https://github.com/kubernetes/enhancements/issues/3329#issuecomment-1571643421
+
+```yaml
+restartPolicy: Never
+containers:
+- name: myapp1
+  # the default behavior is inherited from the Pod's restartPolicy
+  restartPolicy: Custom
+  # container-level API for specifying restart rules
+  restartRules:
+  - action: Ignore # Ignore the specific exit code as "not a failure"
+    onExitCodes:
+      operator: In
+      values: [42]
+```
+
+The proposal is to support the following combinations:
+
+- The action can only be `Ignore`;
+- Only `onExitCodes` rules are allowed, no other conditions;
+- The `operator` can be either `In` or `NotIn`;
+- Values only support an array of integers and no wildcard. An empty list
+is supported to express "any".
+
+With the limitations above, the API does nothing for pods with
+`restartPolicy=Always`, since containers are already restarted on any exit
+code. The same applies to `restartPolicy=OnFailure`, except that exit code 0
+can be configured to be restartable, which is effectively the same as
+`restartPolicy=Always`.
+
+For a pod with `restartPolicy=Never`, the API allows restarting the container
+for a subset of exit codes. The sync and restart logic will be implemented in
+k8s.io/kubelet/container.
+
+This API change is only intended to restart the container if the container
+itself exited with one of the listed exit codes. It is not intended to change
+the behavior of other mechanisms that lead to a container being restarted, for
+example, pod resize or pod restart.
+
+The restart count will increment on each container exit. This introduces a new
+possibility: a Pod with `restartPolicy=Never` may have containers with a
+restart count higher than 0. Previously, this was only possible for sidecar
+containers.
+
+This works with the Job `podFailurePolicy` without any changes to the Job API.
+Currently, the Job controller only checks `podFailurePolicy` after the Pod has
+finished running, and the kubelet restarting a container of the Pod does not
+change the Pod's status. This enables the combination where the Job configures
+`podFailurePolicy` for hardware failures (the Pod needs to be rescheduled to
+another node), while the container configures `restartPolicy` rules to restart
+in place for training errors.
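+
+As a sketch of this combination (the manifest below is illustrative, not part
+of the proposal): a Job-level `podFailurePolicy` rule handles a hardware
+failure, while the container-level restart rules proposed above handle a
+retryable training error. The image and the exit code conventions (91 for a
+hardware failure, 42 for a retryable training error) are hypothetical:
+
+```yaml
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: training-step
+spec:
+  podFailurePolicy:
+    rules:
+    # A hardware failure fails the Pod; the Job-level Ignore action creates a
+    # replacement Pod (rescheduled, possibly onto another node) without
+    # counting the failure against backoffLimit.
+    - action: Ignore
+      onExitCodes:
+        containerName: trainer
+        operator: In
+        values: [91]
+  template:
+    spec:
+      restartPolicy: Never
+      containers:
+      - name: trainer
+        image: registry.example.com/trainer:latest
+        restartPolicy: Custom
+        restartRules:
+        # A retryable training error never becomes a Pod failure; the kubelet
+        # restarts the container in place on the same node.
+        - action: Ignore
+          onExitCodes:
+            operator: In
+            values: [42]
+```
+
+The two `Ignore` actions operate at different levels: the Job-level rule
+ignores the Pod failure for backoff accounting and creates a replacement Pod,
+while the container-level rule prevents the Pod failure in the first place by
+restarting the container in place.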
+
+Container restarts will follow all applicable backoff strategies to ensure
+the container does not overconsume resources.
+
+### Test Plan
+
+[x] I/we understand the owners of the involved components may require updates to
+existing tests to make this code solid enough prior to committing the changes necessary
+to implement this enhancement.
+
+##### Prerequisite testing updates
+
+##### Unit tests
+
+- `k8s.io/apis/core`:
+- `k8s.io/apis/core/v1/validations`:
+- `k8s.io/features`:
+- `k8s.io/kubelet`:
+- `k8s.io/kubelet/container`:
+
+##### Integration tests
+
+Unit and e2e tests provide sufficient coverage for the feature. Integration
+tests may be added to cover any gaps that are discovered in the future.
+
+##### e2e tests
+
+- Verify that containers can specify restartPolicy.
+- Verify that containers that exit with exit codes specified in the restart
+policy are restarted and the pod keeps running.
+- Verify that containers that exit with exit codes not specified in the restart
+policy are not restarted and the pod fails.
+
+### Graduation Criteria
+
+#### Alpha
+
+- Container restart policy added to the API.
+- Container restart policy implemented behind a feature flag.
+- Initial e2e tests completed and enabled.
+
+#### Beta
+
+- Container restart policy functionality running behind a feature flag
+for at least one release.
+- Container restart policy works well with the Job controller.
+
+#### GA
+
+- No major bugs reported for three months.
+- User feedback (ideally from at least two distinct users) is green.
+
+### Upgrade / Downgrade Strategy
+
+The API server should be upgraded before kubelets. Kubelets should be
+downgraded before the API server.
+
+### Version Skew Strategy
+
+A kubelet from a previous version that is unaware of the container restart
+policy will ignore this field and keep the existing behavior determined by the
+pod's restart policy.
+
+## Production Readiness Review Questionnaire
+
+### Feature Enablement and Rollback
+
+###### How can this feature be enabled / disabled in a live cluster?
+
+- [ ] Feature gate (also fill in values in `kep.yaml`)
+  - Feature gate name:
+  - Components depending on the feature gate:
+- [ ] Other
+  - Describe the mechanism:
+  - Will enabling / disabling the feature require downtime of the control
+    plane?
+  - Will enabling / disabling the feature require downtime or reprovisioning
+    of a node?
+
+###### Does enabling the feature change any default behavior?
+
+###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
+
+###### What happens if we reenable the feature if it was previously rolled back?
+
+###### Are there any tests for feature enablement/disablement?
+
+### Rollout, Upgrade and Rollback Planning
+
+###### How can a rollout or rollback fail? Can it impact already running workloads?
+
+###### What specific metrics should inform a rollback?
+
+###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
+
+###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
+
+### Monitoring Requirements
+
+###### How can an operator determine if the feature is in use by workloads?
+
+###### How can someone using this feature know that it is working for their instance?
+
+- [ ] Events
+  - Event Reason:
+- [ ] API .status
+  - Condition name:
+  - Other field:
+- [ ] Other (treat as last resort)
+  - Details:
+
+###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?
+
+###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
+
+- [ ] Metrics
+  - Metric name:
+  - [Optional] Aggregation method:
+  - Components exposing the metric:
+- [ ] Other (treat as last resort)
+  - Details:
+
+###### Are there any missing metrics that would be useful to have to improve observability of this feature?
+
+### Dependencies
+
+###### Does this feature depend on any specific services running in the cluster?
+
+### Scalability
+
+###### Will enabling / using this feature result in any new API calls?
+
+###### Will enabling / using this feature result in introducing new API types?
+
+###### Will enabling / using this feature result in any new calls to the cloud provider?
+
+###### Will enabling / using this feature result in increasing size or count of the existing API objects?
+
+###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
+
+###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
+
+###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)?
+
+### Troubleshooting
+
+###### How does this feature react if the API server and/or etcd is unavailable?
+
+###### What are other known failure modes?
+
+###### What steps should be taken if SLOs are not being met to determine the problem?
+
+## Implementation History
+
+## Drawbacks
+
+## Alternatives
+
+### Wrapping entrypoint
+
+One way to implement this behavior today, as a DIY solution, is to wrap the
+container's entrypoint with a program that implements the exit code handling
+policy. This solution does not scale well, as the wrapper needs to work across
+multiple operating systems and many images, so it is hard to implement
+universally.
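+
+For context, a minimal sketch of such a wrapper, assuming a POSIX shell is
+available in the image; the real entrypoint `/app/train` and the "restart me"
+exit code 42 are hypothetical conventions of the workload:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: diy-restart-wrapper
+spec:
+  restartPolicy: Never
+  containers:
+  - name: trainer
+    image: registry.example.com/trainer:latest
+    command: ["/bin/sh", "-c"]
+    args:
+    - |
+      # Re-run the real entrypoint as long as it exits with the agreed
+      # "restart me" code; propagate any other exit code to the kubelet.
+      while true; do
+        /app/train
+        code=$?
+        if [ "$code" -ne 42 ]; then
+          exit "$code"
+        fi
+      done
+```
+
+Signal handling, restart backoff, and logging are left out here, and all of it
+has to be re-implemented for every image, which is part of why a declarative,
+kubelet-level policy is preferable.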
+
+### Non-declarative (callbacks based) restart policy
+
+An alternative to the declarative failure policy is an approach that allows
+containers to decide their fate dynamically. For example, a callback on an
+"orchestration container" in the Pod is invoked when any other container has
+failed, and the orchestration container decides the fate of that container -
+restart it or keep it failed.
+
+This may be a possibility long term, but even then, both approaches can work
+in conjunction.
+
+## Infrastructure Needed (Optional)
+
diff --git a/keps/sig-node/5307-container-restart-policy/kep.yaml b/keps/sig-node/5307-container-restart-policy/kep.yaml
new file mode 100644
index 00000000000..a84b01916d1
--- /dev/null
+++ b/keps/sig-node/5307-container-restart-policy/kep.yaml
@@ -0,0 +1,45 @@
+title: Container Restart Policy
+kep-number: 5307
+authors:
+  - "@yuanwang04"
+  - "@SergeyKanzhelev"
+owning-sig: sig-node
+participating-sigs:
+status: provisional
+creation-date: 2025-05-16
+reviewers:
+  - "@SergeyKanzhelev"
+approvers:
+  - "@dchen1107" # SIG Node approval
+
+see-also:
+  - "/keps/prod-readiness/sig-apps/3329"
+
+# The target maturity stage in the current dev cycle for this KEP.
+# If the purpose of this KEP is to deprecate a user-visible feature
+# and Deprecated feature gates are added, they should be deprecated|disabled|removed.
+stage: alpha
+
+# The most recent milestone for which work toward delivery of this KEP has been
+# done. This can be the current (upcoming) milestone, if it is being actively
+# worked on.
+latest-milestone: "v1.34"
+
+# The milestone at which this feature was, or is targeted to be, at each stage.
+milestone:
+  alpha: "v1.34"
+  beta: "v1.35"
+  stable: "v1.36"
+
+# The following PRR answers are required at alpha release
+# List the feature gate name and the components for which it must be enabled
+feature-gates:
+  - name: MyFeature
+    components:
+      - kubelet
+disable-supported: true
+
+# The following PRR answers are required at beta release
+metrics:
+  - my_feature_metric