Introduce distributionVersion field for improved Kubernetes Distribution version handling #11816
This issue is currently awaiting triage. If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
Thanks for filing this issue as a follow-up of the discussion on the PR! Somewhat related, we should also consider KEP-4330: Compatibility Versions in Kubernetes, which introduces emulated versions and minimum compatibility version as key information influencing cluster behaviour (see the user stories in the KEP).
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Discussed in the 22nd of May office hours.
Unfortunately, the idea of leaving the spec.version field unset under some circumstances is going to create many problems in CAPI. We need a plan (possibly non-invasive) to address the fact that CAPI is full of code paths that assume version is set and represents a Kubernetes version; this is at the core of foundational constructs like the entire upgrade process and all the related test machinery. Hopefully the previous comments on the same topic might help in starting the work on this plan.
Also, it is worth recalling #11816 (comment) from above.
Would moving it to status (mirroring it when spec is defined, and deriving it from distributionVersion when not) be too disruptive? In any case I'll take all this into account and get back with a possibly improved and more detailed proposal. Thanks for the feedback - much appreciated 🙏
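For illustration only, a minimal sketch in Go of the "mirror or derive" idea from the comment above; `resolveKubernetesVersion` is a hypothetical provider-specific lookup and the version mapping is made up, not part of any existing CAPI API:

```go
package main

import "fmt"

// resolveKubernetesVersion is a hypothetical, provider-specific mapping from a
// distribution version to the Kubernetes version it ships (values are illustrative).
func resolveKubernetesVersion(distributionVersion string) (string, error) {
	known := map[string]string{
		"4.17.0": "1.30.4",
	}
	if v, ok := known[distributionVersion]; ok {
		return v, nil
	}
	return "", fmt.Errorf("unknown distribution version %q", distributionVersion)
}

// reconcileStatusVersion mirrors spec.version into status when it is set, and
// derives the Kubernetes version from spec.distributionVersion otherwise.
func reconcileStatusVersion(specVersion, distributionVersion string) (string, error) {
	if specVersion != "" {
		return specVersion, nil
	}
	return resolveKubernetesVersion(distributionVersion)
}

func main() {
	v, err := reconcileStatusVersion("", "4.17.0")
	fmt.Println(v, err) // 1.30.4 <nil>
}
```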
What component would be responsible for this?
This could be beneficial for more than just the proposal here. Do we do any validation of upgrades at the moment? If we had the option of spec (I want this) and status (the controller has verified through some rules that the upgrade is permitted), then that could unlock more in the way of pre-flight checks, couldn't it? Within OpenShift, for example, we have similar patterns in other places: a user requests "I want this", we validate that transition before accepting it, and the controllers then observe the status where we've said "yes, we verify this transition is acceptable".
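As a rough illustration of that "spec requests, status accepts" pattern (all names are hypothetical, not an existing CAPI or OpenShift API), a pre-flight check might look like this:

```go
package main

import "fmt"

// acceptVersionTransition applies the "spec requests, status accepts" pattern:
// the desired version from spec is only recorded in status once the transition
// has been verified by the supplied rule; other controllers then act on status.
func acceptVersionTransition(statusVersion, specVersion string, permitted func(from, to string) bool) (string, error) {
	if !permitted(statusVersion, specVersion) {
		// Keep status unchanged until the requested transition passes validation.
		return statusVersion, fmt.Errorf("upgrade from %s to %s is not permitted", statusVersion, specVersion)
	}
	return specVersion, nil
}

func main() {
	// Toy rule standing in for whatever pre-flight checks a controller would run.
	oneMinorSkew := func(from, to string) bool { return from == "1.30.4" && to == "1.31.0" }

	v, err := acceptVersionTransition("1.30.4", "1.31.0", oneMinorSkew)
	fmt.Println(v, err) // 1.31.0 <nil>
}
```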
What would you like to be added (User Story)?
As a user, I would like a distributionVersion field to better handle versioning of the Kubernetes distribution being installed.
Detailed Description
Currently, the handling of Kubernetes versions in the version field is problematic when installing or upgrading a distribution of Kubernetes that utilizes its own version scheme and lifecycle. A distribution can include more software components than just Kubernetes, and/or a distinct support lifecycle that includes the release of fixes and other changes on its own timeline. Those characteristics can necessitate the use of an independent version scheme.
The version field, present in multiple resources (ControlPlane, MachineSpec, Cluster topology), specifically represents the Kubernetes version. To control the cluster's distribution version, its value must be specified on the ControlPlane and MachineSet/MachineDeployment objects involved. This is problematic: a user who wants to deploy a specific version of a Kubernetes distribution cannot specify that version directly; instead it has to be derived from the related Kubernetes version, which is not always possible.
To address this, we propose introducing a new field, `spec.distributionVersion`. The new field and the current `spec.version` would be mutually exclusive, and `distributionVersion` would be an optional field for both the ControlPlane contract and MachineSpec. Its value would be the distribution version (e.g. for OpenShift it should be something like v4.17.0). No version semantics would be imposed on the field.

If `distributionVersion` is present, `status.version` should be populated with the related Kubernetes version (e.g. for OpenShift with a `distributionVersion` of 4.17.0, `status.version` should be 1.30.4). All logic that makes decisions based on evaluating the Kubernetes version should rely on the status field instead of the spec field.
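A minimal sketch (in Go, mirroring how CAPI API types are defined) of how the proposed fields could look; the type names below are hypothetical and the exact shape of the contract would be defined in the actual proposal:

```go
package v1beta1

// HypotheticalControlPlaneSpec sketches the proposed, mutually exclusive version fields.
type HypotheticalControlPlaneSpec struct {
	// version is the Kubernetes version of the control plane.
	// Mutually exclusive with distributionVersion.
	// +optional
	Version *string `json:"version,omitempty"`

	// distributionVersion is the distribution-specific version (e.g. "v4.17.0"
	// for OpenShift). No version semantics are imposed on this field.
	// Mutually exclusive with version.
	// +optional
	DistributionVersion *string `json:"distributionVersion,omitempty"`
}

// HypotheticalControlPlaneStatus carries the Kubernetes version actually in use.
type HypotheticalControlPlaneStatus struct {
	// version is the Kubernetes version of the control plane. When
	// spec.distributionVersion is set, the provider derives this value from the
	// distribution (e.g. distributionVersion 4.17.0 -> version 1.30.4), and all
	// Kubernetes-version-based logic reads it from here.
	// +optional
	Version *string `json:"version,omitempty"`
}
```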
Related:
#11564
/cc @fabriziopandini @enxebre @sbueringer
Anything else you would like to add?
No response
Label(s) to be applied
/kind feature
/area api