More faqs #83
- [I updated provisioner configuration but volumes are not discovered](#i-updated-provisioner-configuration-but-volumes-are-not-discovered)
- [I bind mounted a directory into sub-directory of discovery directory, but no PVs created](#i-bind-mounted-a-directory-into-sub-directory-of-discovery-directory-but-no-pvs-created)
- [Failed to start when docker --init flag is enabled.](#failed-to-start-when-docker---init-flag-is-enabled)
- [Can I clean volume data by deleting PV object?](#can-i-clean-volume-data-by-deleting-pv-object)
- [PV with delete reclaimPolicy is released but not going to be reclaimed](#pv-with-delete-reclaimpolicy-is-released-but-not-going-to-be-reclaimed)
- [Why my application uses an empty volume when node gets recreated in GCP](#why-my-application-uses-an-empty-volume-when-node-gets-recreated-in-gcp)
- [Can I change storage class name after some volumes has been provisioned](#can-i-change-storage-class-name-after-some-volumes-has-been-provisioned)

## I updated provisioner configuration but volumes are not discovered
Workarounds before the fix is released:
- do not use docker `--init`; pack [tini](https://github.com/krallin/tini) into your docker image
- do not use docker `--init`; [share the process namespace](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/) in your pod and use [pause:3.1+](https://github.com/kubernetes/kubernetes/blob/master/build/pause/CHANGELOG.md) to clean up orphaned zombie processes
- do not mount `/dev`; the provisioner will discover currently mounted devices, but it cannot discover newly mounted ones (see [why](https://github.com/kubernetes-incubator/external-storage/issues/783#issuecomment-395013458))
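
The process-namespace-sharing workaround above can be sketched as a pod spec. This is a minimal illustration, not the provisioner's actual manifest; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: local-volume-provisioner   # placeholder name
spec:
  # With a shared process namespace, the pause container runs as PID 1
  # of the pod and reaps zombie processes left by the other containers.
  shareProcessNamespace: true
  containers:
  - name: provisioner
    image: example/local-volume-provisioner:latest  # placeholder image
```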

## Can I clean volume data by deleting PV object?

No. The provisioner has no reliable mechanism to detect whether the PV object
of a volume was deleted by the user or simply has not been created yet: the
delete event will not be delivered while the provisioner is not running, and
the provisioner does not know whether the volume has been discovered before.

So the provisioner will always discover the volume as a new PV if no existing
PV is associated with it. The volume data will not be cleaned in this phase,
and old data in it may leak to other applications.

So you must not delete PV objects yourself. Always delete PVC objects instead,
and set `persistentVolumeReclaimPolicy` to `Delete` so that the volume data of
the associated PVs is cleaned.
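
For reference, a local PV with a `Delete` reclaim policy looks roughly like the following. This is a trimmed sketch; the names, capacity, and paths are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # illustrative name
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  # Delete: after the bound PVC is removed, the provisioner cleans the
  # volume data and deletes this PV so the volume can be discovered again.
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage   # illustrative class name
  local:
    path: /mnt/disks/vol1           # illustrative discovery path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1                # illustrative node name
```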

## PV with delete reclaimPolicy is released but not going to be reclaimed

First, check that the provisioner is running on the node. If it is running,
check whether the volume still exists on the node. If the volume is missing,
the provisioner cannot clean the volume data and, for safety, will not delete
the associated PV object. This prevents the old volume from being used by
other programs if it recovers later.

It’s up to the system administrator to fix this:

- If the volume has been decommissioned, you can delete the PV object manually
  (e.g. `kubectl delete pv <pv-name>`).
- If the volume is missing because of an invalid `/etc/fstab`, a broken setup
  script, or a hardware failure, please fix the volume. If the volume can be
  recovered, the provisioner will continue to clean the volume data and
  reclaim the PV object. If the volume cannot be recovered, you can remove the
  volume (or disk) from the node and delete the PV object manually.

Of course, if on a specific platform you have a reliable mechanism to detect
that a volume is permanently deleted and cannot recover, you can write an
operator or sidecar to automate this.

## Why my application uses an empty volume when node gets recreated in GCP

Please check the `spec.local.path` field of the local PV object. If it is a
non-unique path (e.g. one without a UUID in it, such as `/mnt/disks/ssd0`),
newly created disks may be mounted at the same path.

For example, in GKE when nodes get recreated, local SSDs are recreated and
mounted at the same paths (note: `--keep-disks` cannot be used to keep local
SSDs because `autoDelete` cannot be set to false on local SSDs). Pods using
the old volumes will start with empty volumes because the paths of the PV
objects get mounted with the newly created disks.

If your application does not expect this behavior, you should use
[`--local-ssd-volumes`](https://cloud.google.com/sdk/gcloud/reference/alpha/container/node-pools/create)
and configure the provisioner to discover volumes under
`/mnt/disks/by-uuid/google-local-ssds-scsi-fs` or
`/mnt/disks/by-uuid/google-local-ssds-nvme-fs`. Here is [an
example](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/helm/generated_examples/gce.yaml).
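
Such a configuration might look like the following ConfigMap fragment. This is a sketch modeled on the linked example; the ConfigMap name, namespace, and storage class name are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config   # illustrative name
  namespace: kube-system
data:
  storageClassMap: |
    local-scsi:                    # illustrative storage class name
      # Discover volumes under the stable by-uuid path rather than a
      # non-unique path like /mnt/disks/ssd0, so recreated disks do not
      # reappear at an old PV's path.
      hostDir: /mnt/disks/by-uuid/google-local-ssds-scsi-fs
      mountDir: /mnt/disks/by-uuid/google-local-ssds-scsi-fs
```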

This applies in other environments as well if the local paths you configured
are not stable. See our [operations guide](operations.md) and [best
practices](best-practices.md) for production.

## Can I change storage class name after some volumes has been provisioned

Basically, you can't. Once a discovery directory is configured in one storage
class, it must not be configured in another storage class; otherwise, the
volumes will be discovered again under the other storage class, and pods
requesting PVs from different storage classes could mount the same volume.
Once a directory is configured in a storage class, it is better not to change
it.

For now, we don't support migrating volumes to another storage class. If you
really need to do this, the only way is to clean all volumes under the old
storage class, configure the discovery directory under the new storage class,
and restart all provisioners.
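
In terms of the provisioner config, the constraint means each discovery directory appears under exactly one storage class. A sketch with illustrative class names and paths:

```yaml
storageClassMap: |
  fast-disks:                  # illustrative class name
    hostDir: /mnt/fast-disks   # this directory belongs to fast-disks only;
    mountDir: /mnt/fast-disks  # do not also list it under another class
  slow-disks:                  # a second class needs its own directory
    hostDir: /mnt/slow-disks
    mountDir: /mnt/slow-disks
```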