Handle Out of host capacity scenario in OCI nodepools #8315
Conversation
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected. Please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Hi @vbhargav875. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from 6de2a02 to 8280205
strings.Contains(*node.NodeError.Message, "quota")) {
*node.NodeError.Code == "QuotaExceeded" ||
(*node.NodeError.Code == "InternalError" &&
	strings.Contains(*node.NodeError.Message, "Out of host capacity")) {
I don't like string matching as a "contract". Is there no better way to have a hard error code that denotes out of host capacity?
As of today we do not have a better approach. I've added a comment so that we move away from this approach once we have an error code for OOHC (out of host capacity) in the API response.
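For context, a minimal sketch of the string-matching contract being discussed (the helper name is hypothetical; the error codes and message fragments are the ones visible in the hunk above):

```go
package oci

import "strings"

// isOutOfCapacityOrQuotaError is a hypothetical helper illustrating the check
// above. Quota exhaustion has a dedicated error code, but out-of-host-capacity
// currently surfaces as a generic "InternalError", so the message text has to
// be matched until the API exposes a dedicated error code for it.
func isOutOfCapacityOrQuotaError(code, message string) bool {
	if code == "QuotaExceeded" || strings.Contains(message, "quota") {
		return true
	}
	// TODO: drop the string match once an explicit OOHC error code exists.
	return code == "InternalError" && strings.Contains(message, "Out of host capacity")
}
```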
np.manager.InvalidateAndRefreshCache()
nodes, err := np.manager.GetNodePoolNodes(np)
if err != nil {
	klog.V(4).Error(err, "error while performing GetNodePoolNodes call")
Do we already log an error somewhere (i.e. is this an extraneous log)?
If we get an error while scaling down, we shouldn't hide it behind v==4.
Done, added an error log with default verbosity in the GetNodePoolNodes function.
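Roughly what "default verbosity" means here, as a sketch rather than the PR's exact code (the wrapper name and signature are assumptions):

```go
package oci

import "k8s.io/klog/v2"

// getNodePoolNodesLogged sketches the change: failures from the node listing
// call are logged with klog.Errorf (default verbosity) instead of klog.V(4),
// so a failed scale-down is visible without raising the log level.
func getNodePoolNodesLogged(nodePoolID string, list func(string) ([]string, error)) ([]string, error) {
	nodes, err := list(nodePoolID)
	if err != nil {
		klog.Errorf("error while performing GetNodePoolNodes call for nodepool %s: %v", nodePoolID, err)
		return nil, err
	}
	return nodes, nil
}
```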
for _, node := range nodes {
	if node.Status != nil && node.Status.ErrorInfo != nil {
		if node.Status.ErrorInfo.ErrorClass == cloudprovider.OutOfResourcesErrorClass {
			klog.V(4).Infof("Using Compute to calculate nodepool size as nodepool may contain nodes without a compute instance.")
I'm questioning whether this error log should be v==4 level or not.
If a customer sees an issue and requests our help, do we have enough information at the default log level to troubleshoot, or do they need to increase it?
Ideally, v==4 means verbose logging in case we need extra logging.
Agreed, moved this to default verbosity.
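In klog terms, the change is roughly the following (sketch only):

```go
package oci

import "k8s.io/klog/v2"

// logComputeFallback illustrates the verbosity change: the message is emitted
// at the default level rather than behind V(4), so it shows up in standard
// customer logs when troubleshooting.
func logComputeFallback() {
	// Before: klog.V(4).Infof(...), only visible with -v=4 or higher.
	klog.Infof("Using Compute to calculate nodepool size as nodepool may contain nodes without a compute instance.")
}
```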
	klog.V(4).Error(err, "error while performing GetNodePoolNodes call")
	return err
}
if !decreaseTargetCheckViaComputeBool {
We talked offline about how this isn't ideal; ideally, the Delete Node endpoint would be able to handle "deleting" these "ghost" instances. We should leave a comment explaining why we are doing it this way instead.
Done, added a comment explaining this.
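Roughly the shape of the workaround being documented (a sketch under assumptions; only decreaseTargetCheckViaComputeBool comes from the hunk above, the other names are hypothetical):

```go
package oci

// effectiveNodepoolSize sketches the workaround: because the Delete Node
// endpoint cannot currently remove "ghost" nodes that never received a
// compute instance, the size is optionally recomputed from the Compute
// service so the target size reflects only real instances.
func effectiveNodepoolSize(decreaseTargetCheckViaComputeBool bool, nodePoolNodeCount, computeInstanceCount int) int {
	if !decreaseTargetCheckViaComputeBool {
		// Normal path: trust the nodepool's reported node count.
		return nodePoolNodeCount
	}
	// Workaround path: count only nodes backed by a compute instance.
	return computeInstanceCount
}
```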
Force-pushed from 8280205 to 9da0756
Changes are good with me
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jlamillan, trungng92, vbhargav875. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@vbhargav875 you need to add a release note section to your description.
What type of PR is this?
/kind bug
What this PR does / why we need it:
The OCI (nodepools) implementation of cluster-autoscaler does not properly handle scenarios where a node does not have a compute instance. This can occur in the case of Limits Exceeded, Quota Exceeded, or Out of Host Capacity in the region.
The autoscaler can end up in a bad state in these scenarios: the node without the compute instance is never deleted, and attempts to delete it are retried indefinitely.
This PR fixes the issue so that the node without a compute instance does get deleted.
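For illustration, a minimal sketch of the mechanism (the helper name and the provisioningFailed parameter are hypothetical; the cloudprovider types are the ones cluster-autoscaler already defines): a node that failed to get a compute instance is reported to the core autoscaler with OutOfResourcesErrorClass, which lets the autoscaler remove the placeholder node instead of retrying it forever.

```go
package oci

import "k8s.io/autoscaler/cluster-autoscaler/cloudprovider"

// instanceStatusForNode is a hypothetical helper sketching the idea: when a
// nodepool node has a provisioning error (limit/quota exceeded or out of host
// capacity), surface it as an out-of-resources instance error so the core
// autoscaler deletes the placeholder node instead of waiting on it forever.
func instanceStatusForNode(provisioningFailed bool, code, message string) *cloudprovider.InstanceStatus {
	status := &cloudprovider.InstanceStatus{State: cloudprovider.InstanceCreating}
	if provisioningFailed {
		status.ErrorInfo = &cloudprovider.InstanceErrorInfo{
			ErrorClass:   cloudprovider.OutOfResourcesErrorClass,
			ErrorCode:    code,
			ErrorMessage: message,
		}
	}
	return status
}
```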