Support newer patch versions of Kubernetes in setup-envtest
#2583
/kind support
Agree, this would definitely be nice. controller-runtime only contains the setup-envtest binary though, which is only the CLI to download the envtest binaries. The Kubernetes binaries are published via the tools-releases branch in the kubebuilder repo: https://github.com/kubernetes-sigs/kubebuilder/tree/tools-releases So probably we should move this issue over there (@camilamacedo86 WDYT?)
Hi @jonathan-innis, I wanted to update you regarding the binaries for our project.

All binaries that have been built are available at https://storage.googleapis.com/kubebuilder-tools (built from the tools-releases branch of the kubebuilder project). As @sbueringer mentioned, these are built whenever we merge a PR into the tools-releases branch of kubebuilder. You'll notice that we continually generate new versions using the latest updates. However, it's important to note that we do not backport or release older patches due to the current implementation constraints.

On a related note, regarding the generation of these binaries, I believe that the primary responsibility falls under Controller Runtime rather than Kubebuilder (the envtest library implementation lives in Controller-Runtime). Kubebuilder primarily acts as a consumer of Controller Runtime's envtest, using it more like a library.

That being said, your suggestion caught my attention, and I genuinely support the idea of bringing changes to our current process. In an ideal scenario, I'd advocate for drafting a formal proposal to shift the binary generation responsibility to Controller-Runtime. This move seems more aligned with the domain of responsibilities. However, if you're inclined to propose alternative methods or improvements to our current process, I encourage you to do so. Feel free to open PRs against the relevant branch and share your thoughts. Your contributions and ideas are always welcome and highly valued in our community.

Looking forward to your thoughts and contributions. Best regards,
I wonder if there's a way to consume Kubernetes release artifacts directly
@sbueringer I was thinking the same thing. I'm curious why we are repeating the work that's required to reproduce and mirror these binaries if we could just grab the latest released version within a passed-through minor version.
Not sure if k/k releases etcd / kube-apiserver etc. as individual binaries, but pulling container images and extracting them might be an option. Needs some investigation.
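As a rough sketch of what that investigation might look like: the upstream kube-apiserver images on registry.k8s.io could be pulled and their filesystem exported, then the binary extracted from the tar stream. The library choice (go-containerregistry's crane package), the image tag, and the path of the binary inside the image are assumptions for illustration, not anything confirmed in this thread.

```go
// Sketch: pull the upstream kube-apiserver image and extract the binary
// from its flattened filesystem.
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"strings"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Image reference is an assumption; any released patch tag would do.
	img, err := crane.Pull("registry.k8s.io/kube-apiserver:v1.26.6")
	if err != nil {
		log.Fatal(err)
	}

	// crane.Export writes the image's flattened filesystem as a tar stream.
	pr, pw := io.Pipe()
	go func() {
		pw.CloseWithError(crane.Export(img, pw))
	}()

	tr := tar.NewReader(pr)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			log.Fatal("kube-apiserver binary not found in image")
		}
		if err != nil {
			log.Fatal(err)
		}
		// The in-image path is an assumption (distroless images typically
		// ship the binary under /usr/local/bin).
		if hdr.Typeflag == tar.TypeReg && strings.HasSuffix(hdr.Name, "kube-apiserver") {
			out, err := os.OpenFile("kube-apiserver", os.O_CREATE|os.O_WRONLY, 0o755)
			if err != nil {
				log.Fatal(err)
			}
			if _, err := io.Copy(out, tr); err != nil {
				log.Fatal(err)
			}
			out.Close()
			return
		}
	}
}
```

Whether etcd would need a separate source (it is not part of k/k's image set in the same way) is one of the open questions such an investigation would have to answer.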
Hi @jonathan-innis, @sbueringer, @troy0820, to answer your question:

How it works and why: For cases where the upstream project provides the binaries, we gather them to build the tarball. However, we require a tarball with binaries compatible across more operating systems and architectures than they provide, so in those scenarios we are compelled to build the binaries ourselves. This was our initial motivation (see: Kubebuilder Tools Releases).

Complexities to achieve what is asked here: Your concern does not seem to address the complexities of what you are asking and aiming to achieve: the creation of tarballs for patch releases retroactively. Even when we have Kubernetes 1.26, we would still need to create tarballs for 1.25, 1.24, 1.23, and so on. IMHO, what is complex is not generating them but automating it. The following outlines what seems required, in my POV, to achieve what you are asking here.
💡 Suggestion/Proposed Solution
It would fit great in the kubebuilder docs.
Yeah, I see the concern here with all the complexity we are talking about. We are basically talking about a rewrite of the way the repo is maintained and the way releases are cut. It definitely seems like a cost/benefit trade-off, given the number of people who I assume are maintaining and contributing to the project versus the benefit to users testing (assuming they need the newer patch releases). This one became of particular interest to us because of the bug that I outlined above, which causes consistent flakes in our CI testing, but I can see the case that these instances are few and far between.
As an aside, it seems surprising to me that kubebuilder is supporting and publishing more diverse binaries than the upstream Kubernetes release process. Is there a reason they don't publish through a similar mechanism, so that we could pull all of them for free without having to build some of them manually?
Hi @jonathan-innis,
The complexities are not only about the effort to change.
💡 Enhancement of Previous Suggestion/Proposed Solution:

We propose a solution where users generate the tarball locally using a Dockerfile. This approach eliminates the need for us to maintain these artifacts, allowing users the flexibility to generate them for any version they require. We can integrate the Dockerfile as part of the setup-envtest tooling in controller-runtime. Alternatively, incorporating it into the Kubebuilder scaffold is also a viable option. However, it's important to note that if the Dockerfile is only included in the Kubebuilder scaffold, it may not be accessible to users who rely on controller-runtime and envtest but do not use Kubebuilder.

@sbueringer wdyt?
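To make the "generate the tarball locally" idea concrete, here is a minimal Go sketch that packages already-built binaries into the layout the existing kubebuilder-tools archives use (kubebuilder/bin/etcd, kube-apiserver, kubectl). The source directory, output filename, and archive layout are assumptions for illustration; the actual proposal is Dockerfile-based and would define its own details.

```go
// Sketch: build a kubebuilder-tools-style tar.gz from locally built binaries.
package main

import (
	"archive/tar"
	"compress/gzip"
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Output name mirrors the existing archive naming; purely illustrative.
	out, err := os.Create("kubebuilder-tools-1.26.6-linux-amd64.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	gz := gzip.NewWriter(out)
	defer gz.Close()
	tw := tar.NewWriter(gz)
	defer tw.Close()

	// "bin" is a placeholder for wherever the locally built binaries live.
	for _, name := range []string{"etcd", "kube-apiserver", "kubectl"} {
		f, err := os.Open(filepath.Join("bin", name))
		if err != nil {
			log.Fatal(err)
		}
		info, err := f.Stat()
		if err != nil {
			log.Fatal(err)
		}
		// setup-envtest expects the binaries under kubebuilder/bin/ inside
		// the archive (assumption based on the published tarballs).
		hdr := &tar.Header{
			Name: "kubebuilder/bin/" + name,
			Mode: 0o755,
			Size: info.Size(),
		}
		if err := tw.WriteHeader(hdr); err != nil {
			log.Fatal(err)
		}
		if _, err := io.Copy(tw, f); err != nil {
			log.Fatal(err)
		}
		f.Close()
	}
}
```

A Dockerfile would wrap the same idea: build (or fetch) the binaries for the requested version and OS/arch, then emit an archive in this layout for setup-envtest to consume from a local path.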
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Some context:
When implementing the envtest binary publishing in controller-tools, we also implemented it in a way that makes it very easy to publish binaries for more versions of Kubernetes. An example can be found here: kubernetes-sigs/controller-tools#924. (Please note that we have some work to do before setup-envtest will be able to consume envtest binaries from controller-tools releases.)
/remove-lifecycle rotten
Recently merged #2811, so envtest binaries from the new location (controller-tools release attachments) can now be used. For available releases, please either check via setup-envtest or see them here: https://github.com/kubernetes-sigs/controller-tools/releases. If you need additional versions, feel free to open a PR against controller-tools; for prior art see kubernetes-sigs/controller-tools#956 or any of the other PRs: https://github.com/kubernetes-sigs/controller-tools/pulls?q=is%3Apr+sort%3Aupdated-desc+is%3Aclosed+%22Release+envtest%22. Closing the issue as this should cover what we need. Please let me know if I'm missing anything and we can reopen.

/close
@sbueringer: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Right now, we are hitting transient errors with the kube-apiserver due to our testing of CEL. We are using setup-envtest to download the binaries needed for setup-envtest from the Google Cloud mirror.

The issues we are hitting with respect to CEL and the apiserver are documented in kubernetes/kubernetes#114857. This change was cherry-picked across releases, which can be seen at the bottom of the issue here: kubernetes/kubernetes#114661.

From looking at the setup-envtest manifest, I can see that setup-envtest is often not picking up newer cherry-picked releases into the mirror, meaning that the fix to the kube-apiserver that was added in 1.25.6, 1.26.6, etc. is not released with setup-envtest. As a result, we are still using an older version of the apiserver that is buggy and causing CI failures.

It would be awesome if, as Kubernetes released new cherry-picks of each of these binaries, setup-envtest would mirror these binaries over to the Google Cloud mirror so that we could always have the most up-to-date version of the binary running against any given minor version that we are testing.
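For reference, a minimal sketch of how these binaries end up being consumed in a test, assuming KUBEBUILDER_ASSETS has been pointed at the directory that setup-envtest downloaded (the exact CI wiring is an assumption, not something described above):

```go
// Sketch: start envtest against binaries downloaded by setup-envtest.
package controller_test

import (
	"os"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func TestAPIServerStarts(t *testing.T) {
	testEnv := &envtest.Environment{
		// If left empty, envtest falls back to the KUBEBUILDER_ASSETS
		// environment variable, so setting the field explicitly is optional.
		BinaryAssetsDirectory: os.Getenv("KUBEBUILDER_ASSETS"),
	}

	cfg, err := testEnv.Start()
	if err != nil {
		t.Fatalf("failed to start envtest: %v", err)
	}
	defer func() { _ = testEnv.Stop() }()

	if cfg.Host == "" {
		t.Fatal("expected a non-empty API server host")
	}
}
```

With the assets pinned this way, picking up a patched kube-apiserver would be a matter of re-running setup-envtest for the newer patch version, which is exactly what this issue asks the mirror to support.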