
check the scalability of one kind prior to get its scale subresource from apiserver #6431

Open

wants to merge 1 commit into base: master
Conversation

yunwang0911

What type of PR is this?

/kind bug

What this PR does / why we need it:

In the VPA recommender, the controller fetcher resolves the topmost well-known or scalable controller using the function FindTopMostWellKnownOrScalable. This works fine most of the time, but it runs into client-go rate limiting when the topmost controller is neither well known nor scalable and the number of VPAs is large (3k+, say), because the fetcher tries to fetch the scale subresource from the apiserver for every topmost object. This PR instead checks whether the topmost controller has a scale subresource by looking for the resource name <pluralObject>/scale in the cached discovery client. If the topmost controller has no scale subresource, the check fails and nil is returned directly. The whole process is handled in memory, with no need to query the kube-apiserver.

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?

No user-facing change.

Check the topmost controller's scale subresource by finding the resource name `<pluralObject>/scale` in the cached discovery client.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jan 9, 2024
@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: yunwang0911
Once this PR has been reviewed and has the lgtm label, please assign jbartosik for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@yunwang0911 (Author)

@kgolab @kwiesmueller @jbartosik @voelzmo @krzysied Could anyone kindly review this PR?

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Feb 2, 2024
@voelzmo (Contributor) left a comment


Thanks for the PR!

I have one direct comment about the code changes, where I'm not sure it can actually work this way.

Apart from that single comment: could you help me understand a bit more what the original problem you're trying to solve is? Which calls exactly are you trying to avoid, and in which scenario do they happen?
We do already have a cache implementation for the scale subresource per resource, so that's probably not it?

In your PR description you write

This works fine most of the time, but it runs into client-go rate limiting when the topmost controller is neither well known nor scalable (...)

The topmost resource will need to be either well-known or scalable, otherwise the VPA will throw an error about misconfiguration. A similar error happens if the resource in the targetRef is not scalable or well-known.

Maybe you can offer a concrete example where you see this happening, to help me understand better what this PR is solving. Thanks!

@@ -206,6 +209,12 @@ func (f *controllerFetcher) getParentOfController(controllerKey ControllerKeyWit
return getParentOfWellKnownController(informer, controllerKey)
}

// check if it's scalable
scalable := f.isScalable(controllerKey.ApiVersion, controllerKey.Kind)
if !scalable {
Contributor

I'm not sure this early exit actually works?
A Pod might have an owner reference that is neither well known nor scalable. As long as the topmost owner (the end of the "ownership chain") is scalable, this should be sufficient. With this change, we would return nil, as if this controller didn't have an owner, which doesn't seem correct.

Author

What if the topmost owner (the end of the "ownership chain") isn't scalable? Even if the spec.targetRef is scalable, this VPA still won't be synced.

@yunwang0911 (Author) commented Mar 8, 2024

Which calls exactly are you trying to avoid, and in which scenario do they happen?

For example, suppose there is a CRD called GlobalDeployment that owns a Deployment and has no scale subresource:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
  ownerReferences:
  - apiVersion: apps.xx.xx/v1
    controller: true
    kind: GlobalDeployment
    name: nginx
    uid: xxx
spec:
  ...

And the targetRef of the VPA is the Deployment.
Then, following the ownership relationship Deployment -> GlobalDeployment, the topmost owner is GlobalDeployment even though the VPA targetRef is the Deployment. The function FindTopMostWellKnownOrScalable checks whether GlobalDeployment is well known or scalable. Checking whether GlobalDeployment is well known is easy; checking whether it's scalable is hard. Even though we do have a cache implementation for the scale subresource per resource, it doesn't help for GlobalDeployment: GlobalDeployment isn't scalable and has no scale subresource, so the fetcher tries to get the scale subresource from the apiserver every time. Of course, it fails. Then it retries, fails again, and so on.
Instead of getting the scale subresource from the apiserver every time, this PR validates the scale subresource by checking the locally cached APIResource list via the cached discovery client.

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 8, 2024
@voelzmo (Contributor) commented Mar 13, 2024

Thanks for adding additional context, @yunwang0911! I think I understand your use-case and what you're trying to improve here. I'm curious if you saw a decrease in calls made to the kube-apiserver after running a vpa-recommender with these changes?

Looking at the code paths your PR touches leads me to believe that you're trying to improve the flow in FindTopMostWellKnownOrScalable.

In getParentOfController, we call getOwnerForScaleResource, which uses a RESTMapper to get the resources for a kind and then find out if the resource supports the /scale subresource.

Getting the REST mappings uses the very same CachedDiscoveryClient that you're using in your PR, and checking the /scale subresource is cached (even if the result was negative/error). Refreshes are done every 10 minutes, as this is the cache freshness time.

My thinking is that, given the amount of caching on that code path mentioned above, this will probably not reduce the calls made to the kube-apiserver by much, except that you avoid the item being added to the cache and therefore being refreshed.

Is my understanding of your concerns correct until this point and you're mainly concerned with the 10 minute cache refresh interval?

If we want to change this, we can think about two different things:

  • Keep adding everything into the cache just like we currently do and change GetKeysToRefresh, such that we don't refresh (that often?) for certain error return codes
  • Don't add NotFound error items to the cache at all

In both cases, I'm still kind of concerned with the impact on scenarios where getting the /scale subresource didn't work because of RBAC permissions that eventually get fixed. In cases like this, we wouldn't want to make human interaction necessary, but this should rather resolve automatically.

WDYT?
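For illustration, a minimal sketch of the second option, reusing the getScaleForResource shape quoted later in this thread (f.scaleSubresourceCacheStorage and f.scaleNamespacer are the VPA's; apierrors is k8s.io/apimachinery/pkg/api/errors; the NotFound special-casing is the hypothetical change):

func (f *controllerFetcher) getScaleForResource(namespace string, groupResource schema.GroupResource, name string) (*autoscalingapi.Scale, error) {
	if ok, scale, err := f.scaleSubresourceCacheStorage.Get(namespace, groupResource, name); ok {
		return scale, err
	}
	scale, err := f.scaleNamespacer.Scales(namespace).Get(context.TODO(), groupResource, name, metav1.GetOptions{})
	if err != nil && apierrors.IsNotFound(err) {
		// Hypothetical change: don't cache NotFound results, so the periodic
		// refresh never re-fetches a guaranteed failure. The trade-off is that
		// direct lookups for such objects now hit the apiserver each time.
		// Forbidden errors (e.g. missing RBAC) are still cached and refreshed
		// as before.
		return scale, err
	}
	f.scaleSubresourceCacheStorage.Insert(namespace, groupResource, name, scale, err)
	return scale, err
}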

groupResource := mapping.Resource.GroupResource()
scale, err := f.getScaleForResource(key.Namespace, groupResource, key.Name)
if err == nil && scale != nil {
expectedScaleResource := fmt.Sprintf("%ss/%s", strings.ToLower(kind), "scale")
Contributor

This code assumes that the plural form of a resource is always in the form of singular+s. In many cases, this is true, but not all the time. See e.g. the prometheus CRD, where the plural is "prometheuses". It could also have been "prometheis". In short: I don't think it is possible to take this shortcut here.

Author

This code assumes that the plural form of a resource is always in the form of singular+s. In many cases, this is true, but not all the time.

Yes, you are right. Any suggestion?

Author

@voelzmo I changed the code to get the plural name from the APIResource. Please take a look again.

@yunwang0911 (Author)

@voelzmo

I'm curious if you saw a decrease in calls made to the kube-apiserver after running a vpa-recommender with these changes?

Yes, the calls made to the apiserver did indeed decrease.

Looking at the code paths your PR touches leads me to believe that you're trying to improve the flow in FindTopMostWellKnownOrScalable.

Yes, I'm trying to improve the function FindTopMostWellKnownOrScalable.
This PR does one thing: it checks whether a kind supports the scale subresource by consulting the CachedDiscoveryClient, rather than calling the apiserver to fetch the scale subresource of one specific object of that kind.
Specifically, as you mentioned,

In getParentOfController, we call getOwnerForScaleResource, which uses a RESTMapper to get the resources for a kind and then find out if the resource supports the /scale subresource.

The getScaleForResource function first tries to get the scale subresource from scaleSubresourceCacheStorage; on a cache miss, it tries to get the scale subresource from the apiserver and stores the result, success or error, back into the cache.

func (f *controllerFetcher) getScaleForResource(namespace string, groupResource schema.GroupResource, name string) (controller *autoscalingapi.Scale, err error) {
	// First try the per-object cache.
	if ok, scale, err := f.scaleSubresourceCacheStorage.Get(namespace, groupResource, name); ok {
		return scale, err
	}
	// On a cache miss, fetch from the apiserver and cache the result,
	// whether it succeeded or failed.
	scale, err := f.scaleNamespacer.Scales(namespace).Get(context.TODO(), groupResource, name, metav1.GetOptions{})
	f.scaleSubresourceCacheStorage.Insert(namespace, groupResource, name, scale, err)
	return scale, err
}

The scaleSubresourceCacheStorage stores existing scale objects. For example, if I have a VPA "myns/vpa-test" that references my Deployment "myns/nginx-test", the scaleSubresourceCacheStorage will store an item which is the scale object of the Deployment "myns/nginx-test".
However, suppose I have a CRD named "MyDeployment" that doesn't support the scale subresource. If I create a VPA "myns/vpa-test1" that references a MyDeployment "myns/mydeployment-test", what will happen? Of course, syncing the status of the VPA will fail. But the function getScaleForResource will do the following:

1. It checks scaleSubresourceCacheStorage and finds that the scale object doesn't exist.
code -> ok, scale, err := f.scaleSubresourceCacheStorage.Get(namespace, groupResource, name)
2. It tries to get the scale subresource from the apiserver, which also fails.
code -> scale, err := f.scaleNamespacer.Scales(namespace).Get(context.TODO(), groupResource, name, metav1.GetOptions{})
3. Finally, it returns the error.

You see, for every object of this kind, a call to the apiserver is made every time until the VPA is deleted.
In conclusion, the current logic checks whether an object is scalable by checking whether its scale object exists: 1. in the scale subresource cache; 2. from the apiserver.
Actually, we can check whether an object is scalable by checking whether the resource "<key>/scale" exists in the cached discovery client.
The difference is that this PR checks whether a kind supports the scale subresource, while the old logic checks whether the scale object of one specific object of that kind exists.

Old: check whether the scale object of an object exists in scaleSubresourceCacheStorage -> try to get the scale object from the apiserver, assuming that the scale object exists
Improved: check whether the resource "<key>/scale" of the kind exists in the cached discovery client

As the function below shows, isScalable determines whether a kind is scalable by checking whether the scale subresource of that kind exists in the cached discovery client. This function doesn't need to call the apiserver.

// isScalable returns true if the given controller is scalable.
// isScalable checks if the controller is scalable by checking if the resource "<key>/scale" exists in the cached discovery client.
func (f *controllerFetcher) isScalable(apiVersion string, kind string) bool {
	resourceList, err := f.cachedDiscoveryClient.ServerResourcesForGroupVersion(apiVersion)
	if err != nil {
		klog.Errorf("Could not find resources for %s: %v", apiVersion, err)
		return false
	}
	expectedScaleResource := fmt.Sprintf("%ss/%s", strings.ToLower(kind), "scale")
	for _, r := range resourceList.APIResources {
		if r.Name == expectedScaleResource {
			return true
		}
	}
	return false
}
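Given the pluralization concern raised earlier in the review (deployment -> deployments, but prometheus -> prometheuses), a plural-safe variant could take the canonical plural from the RESTMapper instead of appending "s". A rough sketch, not the PR's final code, assuming the controllerFetcher's existing mapper and cachedDiscoveryClient fields:

// isScalable (plural-safe sketch) checks whether any REST mapping for the
// kind exposes a "<plural>/scale" entry in the cached discovery data.
func (f *controllerFetcher) isScalable(groupKind schema.GroupKind, apiVersion string) bool {
	mappings, err := f.mapper.RESTMappings(groupKind)
	if err != nil {
		klog.Errorf("Could not get REST mappings for %v: %v", groupKind, err)
		return false
	}
	resourceList, err := f.cachedDiscoveryClient.ServerResourcesForGroupVersion(apiVersion)
	if err != nil {
		klog.Errorf("Could not find resources for %s: %v", apiVersion, err)
		return false
	}
	for _, mapping := range mappings {
		// mapping.Resource.Resource is the canonical plural, e.g. "prometheuses".
		expectedScaleResource := mapping.Resource.Resource + "/scale"
		for _, r := range resourceList.APIResources {
			if r.Name == expectedScaleResource {
				return true
			}
		}
	}
	return false
}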

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Apr 4, 2024

for _, mapping := range mappings {
groupResource := mapping.Resource.GroupResource()
scale, err := f.getScaleForResource(key.Namespace, groupResource, key.Name)
@raywainman (Contributor) commented May 13, 2024

I wonder if one reason we need to do this lookup to API Server is for RBAC purposes (as @voelzmo mentions a bit earlier)... Do you happen to know what the impact would be? (I'm digging around a bit myself but thought I'd throw it out there).

If VPA doesn't have the correct permissions on the object (aka /scale + get), this should fail? With the new discovery-based lookup, this would pass instead.

@yunwang0911 (Author) commented May 14, 2024

I don't think so. If we want to check RBAC, we should use SubjectAccessReview. Besides, even if the check passes here, the call chain getParentOfController -> getOwnerForScaleResource -> getScaleForResource will ultimately reach the function getScaleForResource. What do you think?
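For reference, a self-contained sketch of the SubjectAccessReview idea mentioned above (a hypothetical standalone program, not part of this PR): it asks the apiserver whether the current identity may get the deployments/scale subresource, rather than probing a concrete object.

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical wiring: build a clientset from the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Ask: may I "get" the /scale subresource of Deployments?
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:        "get",
				Group:       "apps",
				Resource:    "deployments",
				Subresource: "scale",
			},
		},
	}
	res, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed to get deployments/scale:", res.Status.Allowed)
}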

groupResource := mapping.Resource.GroupResource()
scale, err := f.getScaleForResource(key.Namespace, groupResource, key.Name)
if err == nil && scale != nil {
expectedScaleResource := fmt.Sprintf("%s/%s", plural, "scale")
Contributor

(Assuming the RBAC issues mentioned earlier are OK)

Could we use the ScaleKindResolver?

If you look at the implementation, it is effectively doing what you are doing here by looping over the DiscoveryClient resources. Using that directly would cut down the amount of code needed here :)
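For reference, a rough standalone sketch of how client-go's ScaleKindResolver could be called; the wiring here is hypothetical, and the author notes below that it ultimately didn't behave as expected for this use case.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/scale"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical wiring: build a discovery client from the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// The resolver loops over the discovery resources for the GVR's
	// group/version and looks for a "<resource>/scale" entry.
	resolver := scale.NewDiscoveryScaleKindResolver(dc)

	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	if gvk, err := resolver.ScaleForResource(gvr); err == nil {
		fmt.Printf("%s exposes /scale (scale kind %s)\n", gvr, gvk)
	} else {
		fmt.Printf("%s has no /scale subresource: %v\n", gvr, err)
	}
}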

@yunwang0911 (Author) commented May 14, 2024

Good point! I'll replace it with ScaleKindResolver. Thanks

Author

Updated

@yunwang0911 (Author) commented May 14, 2024

Sorry, I misunderstood what ScaleKindResolver does. Actually, it doesn't work as I expected. Let me elaborate on the difference between the current implementation, ScaleKindResolver, and the implementation in this PR.

@yunwang0911 yunwang0911 force-pushed the perf branch 2 times, most recently from bad38ce to e5419e8 on May 14, 2024 09:20
@yunwang0911 yunwang0911 changed the title use cached discovery client to check scalable use cached discovery client to check scalability May 14, 2024
@yunwang0911 yunwang0911 changed the title use cached discovery client to check scalability check kind scalability instead of the scalability of resource May 14, 2024
@yunwang0911 (Author)

@voelzmo @raywainman Sorry for my unclear explanation.
[image: diagram comparing the three methods below]
Let me elaborate on the difference between the three methods below:

  1. the current implementation - scaleSubresourceCacheStorage
  2. ScaleKindResolver
  3. this PR

One line to describe the difference: the former two methods check whether the scale subresource of a given resource instance exists, while this PR checks whether the Scale APIResource of that resource's kind exists.

Method 1: the current implementation - scaleSubresourceCacheStorage

call chain

1. FindTopMostWellKnownOrScalable -> 2. getParentOfController -> 3. getOwnerForScaleResource -> 4. getScaleForResource -> 5. scaleSubresourceCacheStorage.Get -> 6. scaleNamespacer.Get

At the fifth step, it tries to get the scale from scaleSubresourceCacheStorage; if that succeeds, good, but if it fails, it falls back to the apiserver:

scale, err := f.scaleNamespacer.Scales(namespace).Get(context.TODO(), groupResource, name, metav1.GetOptions{})

It fetches the scale of the resource namespace/name from the apiserver, which means the number of apiserver calls can equal the number of resources. If the resource is scalable, such as a Deployment or StatefulSet, that's fine, because the next lookup can get the scale object from scaleSubresourceCacheStorage; however, if the resource isn't scalable, the calls can burst. This scenario can happen in the case below:

pod -> replicaset -> deployment (scalable) -> DeploymentGroup (unscalable)

If there are a lot of DeploymentGroups (the name matters here, because the cache is keyed per namespace/name) and the VPAs reference Deployments, the recommender will break.

@raywainman (Contributor)

@yunwang0911 Thank you for the really thorough explanation. Let me take a bit of time to look through this and get back to you (I should have time tomorrow during the day, EDT timezone).

@voelzmo (Contributor) commented May 17, 2024

Thanks for the nice picture, @yunwang0911! I still think that the cause for the many requests to the kube-apiserver is not the checks for the /scale resource (as I pointed out above, this is also backed by the same CachedDiscoveryClient – or maybe I'm misunderstanding how this works internally?)

In getParentOfController, we call getOwnerForScaleResource, which uses a RESTMapper to get the resources for a kind and then find out if the resource supports the /scale subresource.

Getting the REST mappings uses the very same CachedDiscoveryClient that you're using in your PR, and checking the /scale subresource is cached (even if the result was negative/error). Refreshes are done every 10 minutes, as this is the cache freshness time.

Instead, the issue is that we're adding the negative result (NotFound error) to the ScaleCache, and that we're doing this per targetRef, not per kind. So the cache will re-check every 10 minutes for every single targetRef.

We have also logic using the ControllerFetcher (and thereby its own instance of a ScaleCache) in the admission-controller.
Additionally, with https://github.com/kubernetes/autoscaler/pull/6460/files#diff-51c34291d2c70f6960e18f955f2a33928e8f5ac58d2d94fe0b7a8f4735f3eca3 we introduced a change that instantiates a ControllerFetcher and ScaleCache for the updater.

That being said, I think the way the cache is designed doesn't really fit what it tries to accomplish. I'd rather try to change how/what we're caching instead of working around this.

  • we don't use the cache for getting the actual selectors from the /scale subresource (which is good, because those are updated e.g. on rollout of a new Deployment version), but we get the scale and fetch the selectors every time
  • we use the cache for determining if an object has a /scale subresource. The only method interacting with the cache is getScaleForResource
    • As you're pointing out in the discussions around this PR, this is a property that's not determined by an instance of a type (a specific object in a specific namespace), but rather by the kind itself; that's why you're trying to implement a shortcut for the negative case here.
    • the only places in which getScaleForResource is called are inside the controller_fetcher:
      • to determine if something is scalable
      • inside a method I'm not sure I understand 100% correctly: getOwnerForScaleResource is meant to get the owner/parent of a controller, and it does this by first checking if the resource has a /scale subresource and then returning the ownerReference. It seems like this is doing two things, and checking if it has a /scale subresource is again not a property of the instance itself, but of the kind

Sorry for the long comment. I'm trying to express that I do understand that you're trying to solve an important problem, but I don't think we should be putting another bandaid on the concept here. I might be misunderstanding how all of this works; maybe it is worthwhile pinging @jbartosik to get some historical context here?

@yunwang0911 yunwang0911 changed the title check kind scalability instead of the scalability of resource check the scalability of one kind prior to get its scale subresource from apiserver May 19, 2024
@yunwang0911 (Author)

@voelzmo Yeah, checking if it has a /scale subresource is not a property of the instance itself but of the kind; this is what I want to express.

func (f *controllerFetcher) getOwnerForScaleResource(groupKind schema.GroupKind, namespace, name string) (*ControllerKeyWithAPIVersion, error) {
	...
	mappings, err := f.mapper.RESTMappings(groupKind)
	if err != nil {
		return nil, err
	}
	var lastError error
	for _, mapping := range mappings {
                 // the RESTMapper is only used here to convert the kind to its resource; it does not check whether /scale is supported by this kind.
		groupResource := mapping.Resource.GroupResource()
		scale, err := f.getScaleForResource(namespace, groupResource, name)
		if err == nil {
			return getOwnerController(scale.OwnerReferences, namespace), nil
		}
		lastError = err
	}
	return nil, lastError
}
func (f *controllerFetcher) getScaleForResource(namespace string, groupResource schema.GroupResource, name string) (controller *autoscalingapi.Scale, err error) {
        // check cache
	if ok, scale, err := f.scaleSubresourceCacheStorage.Get(namespace, groupResource, name); ok {
		return scale, err
	}
        // if it's not in the cache, call the apiserver to get the scale of the object, right?
	scale, err := f.scaleNamespacer.Scales(namespace).Get(context.TODO(), groupResource, name, metav1.GetOptions{})
	f.scaleSubresourceCacheStorage.Insert(namespace, groupResource, name, scale, err)
	return scale, err
}

Per the code, the RESTMapper is only used to convert the GVK to a GVR. Scalability is then checked in two steps: 1. checking the cache; 2. calling the apiserver to get /scale. The flow chart is as follows:
[image: flow chart of the current implementation]

However, we should check whether the kind supports /scale before getting /scale from the apiserver. As we all know, if a kind doesn't support /scale, getting /scale from the apiserver always fails. The logic is as follows:
[image: flow chart of the proposed logic]

@raywainman (Contributor)

Going along with @voelzmo's suggestion above to perhaps flip this and look at the role of the cache in this...

Could we change the way the cache keys entries? Instead of adding each resource instance into the cache, could we change it to key only by kind?

If it's not possible to change this cache (I haven't looked at all the users of this cache but looks like @voelzmo did do an analysis as well), maybe we think about creating a new cache with this new key schema and using it here?

Using this new "per-kind" cache, you would then only ever go to the API server once per kind per 10 minutes.

Wdyt?
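To make this concrete, here is a hypothetical sketch of such a per-kind scalability cache (all names are illustrative; this is not existing VPA code): entries are keyed by GroupKind with a TTL, so each kind costs at most one apiserver/discovery check per refresh interval, negative results included.

package main

import (
	"fmt"
	"sync"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

type kindScaleEntry struct {
	scalable  bool
	refreshAt time.Time
}

// kindScaleCache caches "does this kind support /scale?" per GroupKind,
// instead of per namespace/name as the current ScaleCache does.
type kindScaleCache struct {
	mu      sync.Mutex
	ttl     time.Duration
	entries map[schema.GroupKind]kindScaleEntry
}

func newKindScaleCache(ttl time.Duration) *kindScaleCache {
	return &kindScaleCache{ttl: ttl, entries: map[schema.GroupKind]kindScaleEntry{}}
}

// Get returns (scalable, true) on a fresh hit and (false, false) on a miss
// or an expired entry.
func (c *kindScaleCache) Get(gk schema.GroupKind) (scalable, ok bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, found := c.entries[gk]
	if !found || time.Now().After(e.refreshAt) {
		return false, false
	}
	return e.scalable, true
}

// Insert records whether a kind is scalable; negative results are cached
// too, so unscalable kinds also stop generating apiserver traffic.
func (c *kindScaleCache) Insert(gk schema.GroupKind, scalable bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entries[gk] = kindScaleEntry{scalable: scalable, refreshAt: time.Now().Add(c.ttl)}
}

func main() {
	c := newKindScaleCache(10 * time.Minute)
	gk := schema.GroupKind{Group: "apps.example.com", Kind: "GlobalDeployment"}
	if _, ok := c.Get(gk); !ok {
		// Cache miss: this is where the single per-kind discovery or
		// apiserver check would happen.
		c.Insert(gk, false)
	}
	scalable, _ := c.Get(gk)
	fmt.Println(gk, "scalable:", scalable)
}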

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 21, 2024
@voelzmo (Contributor) commented Aug 30, 2024

@raywainman what about the ideas regarding changing how the cache works? Should we make an issue out of this and reduce the number of calls made?

If it's not possible to change this cache (I haven't looked at all the users of this cache but looks like @voelzmo did do an analysis as well), maybe we think about creating a new cache with this new key schema and using it here?

Maybe we can also ping @jbartosik for this one, given that he was driving the cache development back in the day?

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 4, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@yunwang0911 (Author)

/reopen

@k8s-ci-robot (Contributor)

@yunwang0911: Failed to re-open PR: state cannot be changed. The perf branch was force-pushed or recreated.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@yunwang0911 (Author)

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Dec 13, 2024
@k8s-ci-robot (Contributor)

@yunwang0911: Reopened this PR.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Dec 13, 2024
@yunwang0911 (Author) commented Dec 13, 2024

@voelzmo @raywainman I reopened this PR since I think it is really helpful. Besides, I saw your discussion in PR #7517, and I agree that "we don't need to make the calls checking /scale for every object of a kind, it is fine to do it per kind or GroupResource." That is exactly what this PR does.
PTAL

Labels
area/vertical-pod-autoscaler cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

5 participants