Enable kubernetes_node_scale benchmark (up to 5k nodes) on AWS EKS with Karpenter #6512
Base branch: master
```diff
@@ -233,6 +233,19 @@
     'Whether to install AWS Load Balancer Controller in EKS Karpenter clusters'
     'Default value - do not install unless explicitly requested',
 )
+flags.DEFINE_integer(
+    'eks_karpenter_limits_vcpu_per_node',
```
Collaborator: Use a FlagHolder: https://absl.readthedocs.io/en/latest/absl.flags.html#absl.flags.FlagHolder

Also, I'm not sure this is generally the right place for these. Ideally both should probably go in config_overrides, with this one maybe set from the vm_spec CPU size and the other coming in a follow-up CL.
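For reference, the FlagHolder pattern the comment suggests keeps the object returned by `DEFINE_*` and reads the parsed value through it, rather than looking the flag up by string name. A minimal sketch using this PR's flag name (the `_VCPU_PER_NODE` constant and `assumed_vcpus` helper are illustrative, not code from the PR):

```python
from absl import flags

# FlagHolder pattern: DEFINE_integer returns a FlagHolder; binding it to a
# module-level constant gives typed, typo-safe access via .value instead of
# string-keyed lookups on FLAGS.
_VCPU_PER_NODE = flags.DEFINE_integer(
    'eks_karpenter_limits_vcpu_per_node',
    2,
    'Assumed vCPUs per provisioned node.',
)


def assumed_vcpus() -> int:
  # .value raises an error if flags have not been parsed yet, which catches
  # accidental pre-parse reads.
  return _VCPU_PER_NODE.value
```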
```diff
+    2,
+    'Assumed vCPUs per provisioned node when computing Karpenter NodePool '
+    'limits.cpu on EKS (uses kubernetes_scale_num_nodes, this value, and 5% '
+    'headroom; minimum limit 1000). Raise for larger EC2 instance shapes.',
+)
+flags.DEFINE_list(
+    'eks_karpenter_nodepool_instance_types',
+    [],
+    'Comma-separated EC2 types for the Karpenter default NodePool (worker '
+    'nodes only). Empty keeps instance-category/generation in the template.',
+)
 AWS_CAPACITY_BLOCK_RESERVATION_ID = flags.DEFINE_string(
     'aws_capacity_block_reservation_id',
     None,
```
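The `eks_karpenter_limits_vcpu_per_node` help text describes how the NodePool `limits.cpu` value is computed. A minimal sketch of that arithmetic (the helper name is hypothetical; the formula follows the help text as stated: nodes times assumed vCPUs, plus 5% headroom, floored at 1000):

```python
import math


def karpenter_cpu_limit(num_nodes: int, vcpu_per_node: int = 2) -> int:
  """Hypothetical sketch of the limits.cpu computation from the flag help.

  kubernetes_scale_num_nodes * assumed vCPUs per node, with 5% headroom,
  never below the minimum limit of 1000.
  """
  return max(1000, math.ceil(num_nodes * vcpu_per_node * 1.05))


# For a 5k-node run with the default 2 vCPUs per node:
# karpenter_cpu_limit(5000) -> 10500
# For a small ~10-node run the 1000 floor dominates:
# karpenter_cpu_limit(10) -> 1000
```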
Reviewer: Can we just not specify anything and let Karpenter decide? Or is this indeed necessary? It seems clever, but a little annoying / a bad user experience on Karpenter's part.
Reply: These are the resources for the Karpenter controller pod (the node where Karpenter itself runs). Karpenter doesn't manage that node, so it can't "decide" these values; we have to set them ourselves. For runs with ~10 nodes, 1 vCPU / 1Gi is sufficient; we only increase them when node_scale is 500+ or 1000+.
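The controller-pod sizing described in the reply would typically be passed through the Karpenter Helm chart's `controller.resources` values. A hypothetical fragment matching the ~10-node sizing mentioned above (the exact values and override mechanism used in this PR may differ):

```yaml
# Hypothetical values.yaml fragment for the Karpenter controller pod.
# 1 vCPU / 1Gi matches the "sufficient for ~10 nodes" sizing in the reply;
# larger node_scale runs (500+, 1000+) would raise these.
controller:
  resources:
    requests:
      cpu: "1"
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 1Gi
```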