
Support newer patch versions of Kubernetes in setup-envtest #2583

Closed
jonathan-innis opened this issue Nov 17, 2023 · 15 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@jonathan-innis
Member

Right now, we are hitting transient errors with the kube-apiserver due to our testing of CEL. We are using setup-envtest to download the binaries needed for envtest from the Google Cloud mirror.

The issues we are hitting with respect to CEL and the apiserver are documented in kubernetes/kubernetes#114857. The fix was cherry-picked across releases, as can be seen at the bottom of kubernetes/kubernetes#114661.

From looking at the setup-envtest manifest, I can see that newer cherry-picked releases are often not picked up into the mirror. This means the kube-apiserver fix that shipped in 1.25.6, 1.26.6, etc. is not available through setup-envtest, so we are still running an older, buggy apiserver that causes CI failures.

It would be awesome if, as Kubernetes releases new cherry-picks of these binaries, setup-envtest mirrored them to the Google Cloud mirror so that we always have the most up-to-date patch of whichever minor version we are testing against.
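For context, here is a minimal sketch (not our exact setup) of how a test suite consumes the binaries that setup-envtest downloads; the package name and the explicit KUBEBUILDER_ASSETS wiring are illustrative:

```go
package controllers_test

import (
	"os"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/envtest"
)

func TestMain(m *testing.M) {
	testEnv := &envtest.Environment{
		// setup-envtest prints the directory containing kube-apiserver and etcd;
		// it is commonly exported as KUBEBUILDER_ASSETS. Whatever patch version
		// setup-envtest downloaded is what the tests run against, which is why a
		// stale mirror matters here.
		BinaryAssetsDirectory: os.Getenv("KUBEBUILDER_ASSETS"),
	}

	cfg, err := testEnv.Start()
	if err != nil {
		panic(err)
	}
	_ = cfg // rest.Config for building clients against the test apiserver

	code := m.Run()
	if err := testEnv.Stop(); err != nil {
		panic(err)
	}
	os.Exit(code)
}
```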

@jonathan-innis jonathan-innis changed the title Support newer patch version of Kubernetes in setup-envtest Support newer patch versions of Kubernetes in setup-envtest Nov 17, 2023
@troy0820
Member

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Nov 18, 2023
@sbueringer
Member

sbueringer commented Nov 20, 2023

Agree, this would definitely be nice.

controller-runtime only contains the setup-envtest binary though, which is only the CLI to download the envtest binaries. The Kubernetes binaries are published via the tools-release branch in the kubebuilder repo: https://github.com/kubernetes-sigs/kubebuilder/tree/tools-releases

So probably we should move this issue over there (@camilamacedo86 WDYT?)

@camilamacedo86
Member

Hi @jonathan-innis,

I wanted to update you regarding the binaries for our project. All binaries that have been built are available at https://storage.googleapis.com/kubebuilder-tools (built from a branch of the kubebuilder project). As @sbueringer mentioned, these are built whenever we merge a PR into the tools-releases branch of kubebuilder. You'll notice that we continually generate new versions using the latest updates. However, it's important to note that we do not backport or release older patches due to current implementation constraints.

On a related note, regarding the generation of these binaries, I believe the primary responsibility falls under controller-runtime rather than Kubebuilder (the envtest library implementation lives in controller-runtime). Kubebuilder primarily acts as a consumer of controller-runtime's envtest, using it as a library.

That being said, your suggestion caught my attention, and I genuinely support the idea of changing our current process. In an ideal scenario, I'd advocate for drafting a formal proposal to shift the binary generation responsibility to controller-runtime, which seems more aligned with the respective domains of responsibility.

However, if you're inclined to propose alternative methods or improvements to our current process, I encourage you to do so. Feel free to open PRs against the relevant branch and share your thoughts. Your contributions and ideas are always welcome and highly valued in our community.

Looking forward to your thoughts and contributions.

Best regards,

@sbueringer
Member

I wonder if there's a way to consume Kubernetes release artifacts directly

@jonathan-innis
Member Author

@sbueringer I was thinking the same thing. I'm curious why we are repeating the work required to reproduce and mirror these binaries if we could just grab the latest released patch of whichever minor version is passed through.

@sbueringer
Member

sbueringer commented Nov 21, 2023

Not sure if k/k releases etcd, kube-apiserver, etc. as individual binaries, but pulling container images and extracting them might be an option. Needs some investigation.
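For what it's worth, a rough sketch of the "pull an image and extract the binary" idea using go-containerregistry; the image reference and the in-image binary path are assumptions that would need verification:

```go
package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"strings"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Assumption: upstream publishes per-patch kube-apiserver images at this ref.
	img, err := crane.Pull("registry.k8s.io/kube-apiserver:v1.26.6")
	if err != nil {
		panic(err)
	}

	// crane.Export streams the image's flattened filesystem as a tar archive.
	pr, pw := io.Pipe()
	go func() { pw.CloseWithError(crane.Export(img, pw)) }()

	tr := tar.NewReader(pr)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		// Assumption: the binary lives under a bin/ directory inside the image.
		if hdr.Typeflag == tar.TypeReg && strings.HasSuffix(hdr.Name, "bin/kube-apiserver") {
			out, err := os.OpenFile("kube-apiserver", os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
			if err != nil {
				panic(err)
			}
			if _, err := io.Copy(out, tr); err != nil {
				panic(err)
			}
			out.Close()
			fmt.Println("extracted kube-apiserver")
			return
		}
	}
	fmt.Println("kube-apiserver not found in image filesystem")
}
```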

@camilamacedo86
Member

camilamacedo86 commented Nov 23, 2023

Hi @jonathan-innis, @sbueringer, @troy0820:

To answer your question (how it works today and why):

@sbueringer I was thinking the same thing. I'm curious why we are repeating the work required to reproduce and mirror these binaries if we could just grab the latest released patch of whichever minor version is passed through.

For cases where the upstream project provides the binaries, we gather them to build the tarball. However, we require a tarball with binaries compatible across more operating systems and architectures than they provide. For those scenarios, we are compelled to build the binaries ourselves. This was our initial motivation.

From: Kubebuilder Tools Releases


Do the projects now provide kube-apiserver, kubectl, and etcd for all of these environments? If so, we should not be building them ourselves but gathering them for each architecture (as I understand we are doing today; see the Dockerfiles).


Complexities in achieving what is asked here:

However, your comment does not seem to account for the complexity of what you are asking for: creating tarballs for patch releases retroactively. Even once we are on Kubernetes 1.26, we would still need to create tarballs for 1.25, 1.24, 1.23, and so on.

IMHO, the complexity is not in generating the tarballs but in automating the process. Why?

From my point of view, the following seems required to achieve what you are asking for:

  • There is a need to establish a process for each "MAJOR" version and manage its updates. (Would we need a branch per MAJOR version instead of just one kubebuilder-tools branch? Or a directory per version, and if so, how would the trigger work?)
  • A mechanism to automatically watch the project's releases and trigger these version bumps.
  • An additional complexity lies in automating the association of an etcd version with each patch release of kubectl and the Kubernetes binaries. How would we know automatically which etcd version to use in each case? Would we need to read and parse the Kubernetes go.mod and check the versions there (see the sketch after this list)? If etcd does not publish a release for a given patch, which version should we use, or should we skip it? Note that etcd does not necessarily have a release for every patch either: https://github.com/etcd-io/etcd/releases
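To illustrate the go.mod idea from the last bullet, here is a rough sketch that lists which etcd modules a Kubernetes checkout requires; this only hints at a compatible etcd version, and mapping those module versions to an etcd server release would still be a separate decision:

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"golang.org/x/mod/modfile"
)

func main() {
	// Assumption: run against a checkout of kubernetes/kubernetes at the tag
	// (e.g. v1.26.6) whose tarball we want to build.
	data, err := os.ReadFile("go.mod")
	if err != nil {
		panic(err)
	}
	f, err := modfile.Parse("go.mod", data, nil)
	if err != nil {
		panic(err)
	}
	for _, req := range f.Require {
		// Print the etcd modules Kubernetes pins; a human (or further tooling)
		// still has to map these to an actual etcd release artifact.
		if strings.HasPrefix(req.Mod.Path, "go.etcd.io/etcd") {
			fmt.Printf("%s %s\n", req.Mod.Path, req.Mod.Version)
		}
	}
}
```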

PS: Manually, we try to keep the latest available versions bumped as they are released, so we do have patches until we start generating tarballs for a new "MAJOR" Kubernetes version.


💡 Suggestion/Proposed Solution

  • We can document what is inside kubebuilder-tools (what setup-envtest downloads)
  • We can describe how you can create your own tarball locally with the versions that you wish (the Dockerfile can be run locally or wherever you prefer)
  • Provide example guidance

It would fit great in the kubebuilder docs.
Maybe here: https://book.kubebuilder.io/reference/envtest.html

@jonathan-innis
Member Author

From my point of view, the following seems required to achieve what you are asking for

Yeah, I see the concern here with all the complexity we are talking about. We are basically talking about a rewrite of the way the repo is maintained and the way the workflow is released. It definitely seems like a cost/benefit trade-off, given the number of people I assume are maintaining and contributing to the project versus the benefit to users' testing (assuming they need the newer patch releases).

This one became of particular interest to us because of the bug I outlined above, which causes consistent flakes in our CI testing, but I can see the case that these instances are few and far between.

which they do not provide

As an aside, it seems surprising to me that kubebuilder is supporting and publishing a more diverse set of binaries than the upstream Kubernetes release process. Is there a reason they don't publish through a similar mechanism, so that we could pull all of them for free without having to build some of them manually?

@camilamacedo86
Member

camilamacedo86 commented Nov 24, 2023

Hi @jonathan-innis,

Yeah, I see the concern here with all the complexity we are talking about. We are basically talking about a rewrite of the way the repo is maintained and the way the workflow is released.

The complexities are not only about the effort to make the change:

  • Design of Data and Triggers for Retroactive Patching: We need a well-thought-out design for data structures and triggers that would allow for retroactive patching. I encourage you to review the current design to understand how this can be implemented.
  • Manual Effort vs Automatic Triggers: If we continue with manual efforts, we can trigger updates as needed. However, this approach doesn't guarantee consistency. To ensure regular and reliable updates, we should consider implementing automatic triggers. This raises the question: which version of ETCD is optimal for our needs? Additionally, how do we handle situations where there isn't a compatible ETCD release available?
  • Cost to maintain: As you can see, it has a cost. I would really advocate for replacing the current solution with one where we no longer need to generate the tarball ourselves and it could be generated locally, for example.

💡 Enhancement of Previous Suggestion/Proposed Solution:


We propose a solution where users generate the tarball locally using a Dockerfile. This approach eliminates the need for us to maintain these artifacts, allowing users the flexibility to generate them for any version they require.

We can integrate the Dockerfile as part of the setup environment test script in controller-runtime. Alternatively, incorporating it into the Kubebuilder scaffold is also a viable option. However, it's important to note that if the Dockerfile is only included in the Kubebuilder scaffold, it may not be accessible to users who rely on controller-runtime and envtest but do not use Kubebuilder.

@sbueringer wdyt? ^

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 22, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 23, 2024
@sbueringer
Member

sbueringer commented Apr 19, 2024

Some context:

Hey folks,
we are currently experimenting a bit with setup-envtest because the envtest binaries are today hosted on the Google-owned kubebuilder repository.

Going forward we will start hosting the envtest binaries on controller-tools releases. We currently also plan to move setup-envtest to controller-tools as it fits better there. As part of that, we will build and publish the setup-envtest binary on controller-tools releases in the future. This should also give it proper versioning.
#2646 (comment)

When implementing envtest binary publishing in controller-tools, we also implemented it in a way that makes it very easy to publish binaries for more versions of Kubernetes.

An example can be found here: kubernetes-sigs/controller-tools#924

(Please note that we have some work to do before setup-envtest will be able to consume envtest binaries from controller-tools releases)

@sbueringer
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Apr 19, 2024
@sbueringer
Member

sbueringer commented May 23, 2024

@jonathan-innis

Recently merged #2811

So envtest binaries can now be consumed from the new location (controller-tools release attachments).

For available releases, please check via setup-envtest, or you can also see them here: https://github.com/kubernetes-sigs/controller-tools/releases

If you need additional versions, feel free to open a PR against controller-tools, for prior art see: kubernetes-sigs/controller-tools#956 or any of the other PRs: https://github.com/kubernetes-sigs/controller-tools/pulls?q=is%3Apr+sort%3Aupdated-desc+is%3Aclosed+%22Release+envtest%22

Closing the issue as this should cover what we need. Please let me know if I'm missing anything and we can reopen.

/close

@k8s-ci-robot
Contributor

@sbueringer: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
