Merge pull request #7494 from adrianmoisey/readme_updates

VPA - Update docs a little

Showing 6 changed files with 103 additions and 78 deletions.
# Intro

Vertical Pod Autoscaler (VPA) frees users from the necessity of setting
up-to-date resource requests for the containers in their pods. When
configured, it will set the requests automatically based on usage and thus
allow proper scheduling onto nodes so that the appropriate resource amount is
available for each pod. It will also maintain ratios between requests and
limits that were specified in the initial container configuration.

It can both down-scale pods that are over-requesting resources, and also
up-scale pods that are under-requesting resources based on their usage over
The current default version is Vertical Pod Autoscaler 1.2.1

| 0.3.X and lower | 1.7+ |

### Notice on CRD update (>=1.0.0)

**NOTE:** In version 1.0.0, we have updated the CRD definition and added RBAC for the
status resource. If you are upgrading from version (<=0.14.0), you must update the CRD
definition and RBAC.
and a ConfigUnsupported condition will be set on them.

This doc is for installing the latest VPA. For instructions on migrating from older versions, see the [Migration Doc](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/MIGRATE.md).

### Prerequisites

- `kubectl` should be connected to the cluster you want to install VPA in.
- The metrics server must be deployed in your cluster. Read more about [Metrics Server](https://github.com/kubernetes-sigs/metrics-server).
- If you are using a GKE Kubernetes cluster, you will need to grant your current Google
  identity the `cluster-admin` role. Otherwise, you won't be authorized to grant extra
  privileges to the VPA system components.

```console
$ gcloud info | grep Account # get current google identity
Account: [[email protected]]

$ kubectl create clusterrolebinding myname-cluster-admin-binding --clusterrole=cluster-admin --user=[email protected]
Clusterrolebinding "myname-cluster-admin-binding" created
```

- If you already have another version of VPA installed in your cluster, you have to tear down
  the existing installation first with:

```console
./hack/vpa-down.sh
```
To install VPA, please download the source code of VPA (for example with `git clone https://github.com/kubernetes/autoscaler.git`)
and run the following command inside the `vertical-pod-autoscaler` directory:

```console
./hack/vpa-up.sh
```

Note: the script currently reads the environment variables `$REGISTRY` and `$TAG`.
Make sure you leave them unset unless you want to use a non-default version of VPA.

Note: If you are seeing the following error during this step:

```
unknown option -addext
```
with the API server.

To print the YAML contents with all resources that would be understood by
the `kubectl diff|apply|...` commands, you can use:

```console
./hack/vpa-process-yamls.sh print
```
each controller that you want to have automatically computed resource requirements.
This will be most commonly a **Deployment**.
There are four modes in which *VPAs* operate:

- `"Auto"`: VPA assigns resource requests on pod creation as well as updates
  them on existing pods using the preferred update mechanism. Currently, this is
  equivalent to `"Recreate"` (see below). Once restart-free ("in-place") update
  of pod requests is available, it may be used as the preferred update mechanism by
  the `"Auto"` mode.
- `"Recreate"`: VPA assigns resource requests on pod creation as well as updates
  them on existing pods by evicting them when the requested resources differ significantly
  from the new recommendation (respecting the Pod Disruption Budget, if defined).
  This mode should be used rarely, only if you need to ensure that the pods are restarted
  whenever the resource request changes. Otherwise, prefer the `"Auto"` mode, which may take
  advantage of restart-free updates once they are available.
- `"Initial"`: VPA only assigns resource requests on pod creation and never changes them
  later.
- `"Off"`: VPA does not automatically change the resource requirements of the pods.
  The recommendations are calculated and can be inspected in the VPA object.
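As a minimal sketch, the mode is set via `updatePolicy.updateMode` in the VPA spec (the Deployment name `my-app` here is a hypothetical placeholder):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"  # recommendations only; switch to "Auto" once you trust them
```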
### Test your installation

A simple way to check if Vertical Pod Autoscaler is fully operational in your
cluster is to create a sample deployment and a corresponding VPA config:

```console
kubectl create -f examples/hamster.yaml
```
|
@@ -212,17 +215,17 @@ VPA will observe the behaviour of the pods, and after about 5 minutes, they shou | |
updated with a higher CPU request | ||
(note that VPA does not modify the template in the deployment, but the actual requests | ||
of the pods are updated). To see VPA config and current recommended resource requests run: | ||
``` | ||
|
||
```console | ||
kubectl describe vpa | ||
``` | ||
|
||
|
||
*Note: if your cluster has little free capacity these pods may be unable to schedule. | ||
You may need to add more nodes or adjust examples/hamster.yaml to use less CPU.* | ||
|
||
### Example VPA configuration | ||
|
||
``` | ||
```yaml | ||
apiVersion: autoscaling.k8s.io/v1 | ||
kind: VerticalPodAutoscaler | ||
metadata: | ||
|
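The example block above is cut short in this view. For reference, a representative complete manifest might look like the following sketch (the names `my-app-vpa` and `my-app` and the resource bounds are illustrative assumptions, not taken from the example file):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      minAllowed:
        cpu: 100m
        memory: 50Mi
      maxAllowed:
        cpu: "1"
        memory: 500Mi
```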
To diagnose problems with a VPA installation, perform the following steps:

- Check if all system components are running:

```console
kubectl --namespace=kube-system get pods | grep vpa
```

  The above command should list 3 pods (recommender, updater and admission-controller),
  all in state Running.

- Check if the system components log any errors.
  For each of the pods returned by the previous command, do:

```console
kubectl --namespace=kube-system logs [pod name] | grep -e '^E[0-9]\{4\}'
```

- Check that the VPA Custom Resource Definition was created:

```console
kubectl get customresourcedefinition | grep verticalpodautoscalers
```
### Components of VPA

The project consists of 3 components:

- [Recommender](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/recommender/README.md) - monitors the current and past resource consumption and, based on it,
  provides recommended values for the containers' CPU and memory requests.

- [Updater](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/updater/README.md) - checks which of the managed pods have correct resources set and, if not,
  kills them so that they can be recreated by their controllers with the updated requests.

- [Admission Plugin](https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/pkg/admission-controller/README.md) - sets the correct resource requests on new pods (either just created
  or recreated by their controller due to the Updater's activity).

More on the architecture can be found [HERE](https://github.com/kubernetes/design-proposals-archive/blob/main/autoscaling/vertical-pod-autoscaler.md).
will get resources as defined in your controllers (i.e. deployment or
replicaset) and not according to previous recommendations made by VPA.

To stop using Vertical Pod Autoscaling in your cluster:

- If running on GKE, clean up the role bindings created in [Prerequisites](#prerequisites):

```console
kubectl delete clusterrolebinding myname-cluster-admin-binding
```

- Tear down VPA components:

```console
./hack/vpa-down.sh
```

# Limits control

When setting limits, VPA will conform to
[resource policies](https://github.com/kubernetes/autoscaler/blob/vertical-pod-autoscaler-1.2.1/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go#L95-L103).
It will maintain the limit-to-request ratio specified for all containers.

VPA will try to cap recommendations between the min and max of
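The proportional limit scaling can be illustrated with a small arithmetic sketch (illustrative values only, not the VPA source code):

```python
def scale_limit(orig_request_m: int, orig_limit_m: int, new_request_m: int) -> int:
    """Keep the original limit-to-request ratio when the request changes.

    All values are CPU millicores; the same arithmetic applies to memory bytes.
    """
    ratio = orig_limit_m / orig_request_m
    return round(new_request_m * ratio)

# A container originally has request 100m and limit 200m (ratio 2.0).
# If VPA recommends a new request of 175m, the limit is scaled to 350m.
print(scale_limit(100, 200, 175))  # 350
```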
For example you could have 3 profiles: [frugal](deploy/recommender-deployment-lo
use different TargetCPUPercentile (50, 90 and 95) to calculate their recommendations.

Please note the usage of the following arguments to override the default names and percentiles:

- `--recommender-name=performance`
- `--target-cpu-percentile=0.95`

You can then choose which recommender to use by setting `recommenders` inside the `VerticalPodAutoscaler` spec.
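A sketch of selecting a non-default recommender from the VPA object (the recommender name `performance` matches the flag example above; the workload names are hypothetical):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  recommenders:
  - name: performance   # must match the --recommender-name of a running recommender
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app
```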
### Custom memory bump-up after OOMKill

After an OOMKill event was observed, VPA increases the memory recommendation based on the observed memory usage in the event, according to this formula: `recommendation = max(memory-usage-in-oomkill-event + oom-min-bump-up-bytes, memory-usage-in-oomkill-event * oom-bump-up-ratio)`.
You can configure the minimum bump-up as well as the multiplier by specifying startup arguments for the recommender:

- `oom-bump-up-ratio` specifies the memory bump-up ratio when an OOM occurs. Defaults to `1.2`, meaning memory will be increased by 20% after an OOMKill event.
- `oom-min-bump-up-bytes` specifies the minimal increase of memory after observing an OOM. Defaults to `100 * 1024 * 1024` (=100MiB).
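As an illustration of the bump-up arithmetic (a sketch of the max-of-two-candidates form that matches the "increased by 20%" description above, not the recommender's actual code):

```python
OOM_BUMP_UP_RATIO = 1.2                     # default multiplier
OOM_MIN_BUMP_UP_BYTES = 100 * 1024 * 1024   # default minimal bump (100MiB)

def memory_after_oom(usage_bytes: int) -> int:
    """Recommendation after an OOMKill: the larger of 'usage + minimal bump'
    and 'usage * ratio', so small containers still get a meaningful increase."""
    return int(max(usage_bytes + OOM_MIN_BUMP_UP_BYTES,
                   usage_bytes * OOM_BUMP_UP_RATIO))

# 1GiB in use at OOM time: the 20% ratio dominates (1.2GiB).
print(memory_after_oom(1024**3))        # 1288490188
# 50MiB in use at OOM time: the 100MiB minimal bump dominates (150MiB).
print(memory_after_oom(50 * 1024**2))   # 157286400
```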
Usage in the recommender deployment:

```yaml
containers:
- name: recommender
  args:
```

you probably want the CPU recommendation to be an integer. A dedicated recommender
post-processor can enforce this.
To activate this feature, pass the flag `--cpu-integer-post-processor-enabled` when you start the recommender.
The post-processor only acts on containers having a specific configuration. This configuration consists of an annotation on your VPA object for each impacted container.
The annotation format is the following:

```
vpa-post-processor.kubernetes.io/{containerName}_integerCPU=true
```
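A sketch of that annotation applied to a VPA object (the container name `my-container` is a hypothetical placeholder):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
  annotations:
    vpa-post-processor.kubernetes.io/my-container_integerCPU: "true"
```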
### Controlling eviction behavior based on scaling direction and resource

To limit disruptions caused by evictions, you can put additional constraints on the Updater's eviction behavior by specifying `.updatePolicy.EvictionRequirements` in the VPA spec. An `EvictionRequirement` contains a resource and a `ChangeRequirement`, which is evaluated by comparing a new recommendation against the currently set resources for a container.

Here is an example configuration which allows evictions only when CPU or memory get scaled up, but not when both are scaled down:

```yaml
updatePolicy:
  evictionRequirements:
  - resources: ["cpu", "memory"]
    changeRequirement: TargetHigherThanRequests
```

Note that this doesn't prevent scaling down entirely, as Pods may get recreated for different reasons, resulting in a new recommendation being applied. See [the original AEP](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/enhancements/4831-control-eviction-behavior) for more context and usage information.
### Limiting which namespaces are used

By default the VPA will run against all namespaces. You can limit that behaviour by setting one of the following options:

1. `ignored-vpa-object-namespaces` - a comma-separated list of namespaces to ignore
1. `vpa-object-namespace` - a single namespace to monitor

These options are mutually exclusive and cannot be used together.

### Setting the webhook failurePolicy

It is possible to set the failurePolicy of the webhook to `Fail` by passing `--webhook-failure-policy-fail=true` to the VPA admission controller.
Please use this option with caution, as it may be possible to break Pod creation if there is a failure with the VPA.
Use it in conjunction with `--ignored-vpa-object-namespaces=kube-system` or `--vpa-object-namespace` to reduce the risk.
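A sketch of combining these flags in the admission-controller container spec (the surrounding Deployment is elided; the flag values are examples, not recommendations):

```yaml
containers:
- name: admission-controller
  args:
  - --webhook-failure-policy-fail=true
  - --ignored-vpa-object-namespaces=kube-system
```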
# Known limitations

- Whenever VPA updates the pod resources, the pod is recreated, which causes all
  running containers to be recreated. The pod may be recreated on a different
  node.
- VPA cannot guarantee that pods it evicts or deletes to apply recommendations
  (when configured in `Auto` and `Recreate` modes) will be successfully
  recreated. This can be partly
  addressed by using VPA together with [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#basics).
- VPA does not update resources of pods which are not run under a controller.
- Vertical Pod Autoscaler **should not be used with the [Horizontal Pod
  Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-resource-metrics)
  (HPA) on the same resource metric (CPU or memory)** at this moment. However, you can use [VPA with
  HPA on separate resource metrics](https://github.com/kubernetes/autoscaler/issues/6247) (e.g. VPA
  on memory and HPA on CPU) as well as with [HPA on custom and external
  metrics](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#scaling-on-custom-metrics).
- The VPA admission controller is an admission webhook. If you add other admission webhooks
  to your cluster, it is important to analyze how they interact and whether they may conflict
  with each other. The order of admission controllers is defined by a flag on the API server.
- VPA reacts to most out-of-memory events, but not in all situations.
- VPA performance has not been tested in large clusters.
- VPA recommendation might exceed available resources (e.g. Node size, available
  size, available quota) and cause **pods to go pending**. This can be partly
  addressed by using VPA together with [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#basics).
- Multiple VPA resources matching the same pod have undefined behavior.
- Running the vpa-recommender with leader election enabled (`--leader-elect=true`) in a GKE cluster
  causes contention with a lease called `vpa-recommender` held by the GKE system component of the
  same name. To run your own VPA in GKE, make sure to specify a different lease name, e.g.
  `--leader-elect-resource-name=vpa-recommender-lease`.
# Related links

- [FAQ](FAQ.md)
- [Design proposal](https://github.com/kubernetes/design-proposals-archive/blob/main/autoscaling/vertical-pod-autoscaler.md)
- [API definition](pkg/apis/autoscaling.k8s.io/v1/types.go)