added recommended resource conf to kubeauthproxy #2952
Conversation
Walkthrough
Resource specifications are updated in two Kubernetes deployment templates for the kube-auth-proxy components. Container CPU requests increase from 10m to 500m, memory requests increase from 32Mi to 128Mi, CPU limits are removed, and memory limits increase from 64Mi to 128Mi across both deployments.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~5 minutes
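Concretely, the walkthrough describes a change of this shape in both templates (a reconstructed sketch from the values stated above; indentation and exact field order are assumptions, and the removed CPU limit's old value is not given in this page):

```yaml
resources:
  requests:
    cpu: "500m"      # was 10m
    memory: "128Mi"  # was 32Mi
  limits:
    memory: "128Mi"  # was 64Mi; the CPU limit is dropped entirely
```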
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Actionable comments posted: 1
🧹 Nitpick comments (1)
internal/controller/services/gateway/resources/kube-auth-proxy-oauth-deployment.tmpl.yaml (1)
68-73: Add inline documentation for the resource configuration rationale.

If these values are officially recommended by upstream or your organization's standards, add a brief comment explaining the source and rationale. This helps future maintainers understand the design decision.

```yaml
resources:
  requests:
    cpu: "500m"      # Recommended by RHOAIENG-40618; allows burst handling
    memory: "128Mi"  # Based on profiling; monitor for OOM events
  limits:
    memory: "128Mi"  # No CPU limit; see RHOAIENG-40618 for rationale
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- internal/controller/services/gateway/resources/kube-auth-proxy-oauth-deployment.tmpl.yaml (1 hunk)
- internal/controller/services/gateway/resources/kube-auth-proxy-oidc-deployment.tmpl.yaml (1 hunk)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Build/push catalog image
- GitHub Check: golangci-lint
- GitHub Check: kube-linter
- GitHub Check: Run tests and collect coverage on internal and pkg
🔇 Additional comments (1)
internal/controller/services/gateway/resources/kube-auth-proxy-oauth-deployment.tmpl.yaml (1)
68-73: Verify justification for substantial resource increases and CPU limit removal.

The resource specifications show increased CPU and memory requests with CPU limits removed entirely. The PR description lacks detailed rationale for these values, and E2E testing updates were reportedly skipped.
Specific concerns:
- No documented rationale in the PR description—are these based on upstream recommendations, performance testing, or cluster policies?
- Removing CPU limits allows unbounded consumption. Confirm this aligns with your cluster resource policies and namespace quotas.
- Memory limit equals memory request, leaving no headroom for temporary spikes; exceeding the limit gets the container OOM-killed.
- E2E test updates were skipped. Resource changes warrant testing to catch scheduling or performance regressions.
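The QoS consequence of the new shape (memory request equal to limit, no CPU limit) can be illustrated with a small, hypothetical helper that is not part of this repo; it applies the standard Kubernetes QoS classification rules to a single-container pod:

```python
def qos_class(requests: dict, limits: dict) -> str:
    """Classify a single-container pod's QoS class from its resources block.

    Sketch of the Kubernetes rules: BestEffort when nothing is set,
    Guaranteed when CPU and memory limits exist and equal the requests,
    Burstable otherwise. (Kubernetes also defaults a missing request to
    the declared limit; that nuance is ignored here.)
    """
    if not requests and not limits:
        return "BestEffort"
    if ({"cpu", "memory"} <= set(limits)
            and all(limits[r] == requests.get(r) for r in ("cpu", "memory"))):
        return "Guaranteed"
    return "Burstable"

# The new kube-auth-proxy spec: CPU limit removed, memory limit == request.
print(qos_class({"cpu": "500m", "memory": "128Mi"}, {"memory": "128Mi"}))
# → Burstable (no CPU limit means the pod cannot be Guaranteed)
```

So the change moves the pods out of the Guaranteed class, which is worth confirming against cluster policy alongside the quota question above.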
Please provide:
- Link to or excerpt from JIRA RHOAIENG-40618 documenting the resource values as "recommended".
- Confirmation that removing CPU limits aligns with your cluster resource policies.
- Evidence of testing that validates these resource values are sufficient for the oauth2-proxy workload.
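One way to hedge the unbounded-CPU concern, if it matches cluster policy, is a namespace LimitRange that injects a default CPU limit into containers that declare none. This is a sketch only; the name and value below are illustrative assumptions, not taken from the PR:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-cpu-default   # illustrative name
spec:
  limits:
    - type: Container
      default:
        cpu: "1"                # limit injected when a container omits its own
```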
Description
JIRA: RHOAIENG-40618
How Has This Been Tested?
Screenshot or short clip
Merge criteria
E2E test suite update requirement
When bringing new changes to the operator code, such changes are by default required to be accompanied by extending and/or updating the E2E test suite accordingly.
To opt-out of this requirement, fill in the E2E update requirement opt-out justification section below.

E2E update requirement opt-out justification
no need
Summary by CodeRabbit