More faqs
cofyc committed Apr 20, 2019
1 parent 0f25c4f commit 6377cdf
Showing 2 changed files with 45 additions and 1 deletion.
44 changes: 44 additions & 0 deletions docs/faqs.md
@@ -49,3 +49,47 @@ Workarounds before the fix is released:
  - instead of using docker `--init`, pack [tini](https://github.com/krallin/tini) into your docker image
  - instead of using docker `--init`, [share the process namespace](https://kubernetes.io/docs/tasks/configure-pod-container/share-process-namespace/) in your pod and use [pause:3.1+](https://github.com/kubernetes/kubernetes/blob/master/build/pause/CHANGELOG.md) to clean up orphaned zombie processes (see the sketch after this list)
  - do not mount `/dev`; the provisioner will discover currently mounted devices, but it cannot discover devices mounted later (see [why](https://github.com/kubernetes-incubator/external-storage/issues/783#issuecomment-395013458))
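
A minimal sketch of the shared-process-namespace workaround, assuming a Kubernetes version in which `shareProcessNamespace` is available (the pod name and container image below are illustrative):

```yaml
# Illustrative pod: with a shared process namespace the pause container becomes
# PID 1 for all containers and reaps orphaned zombie processes. This relies on
# the node using pause:3.1 or newer as the sandbox image.
apiVersion: v1
kind: Pod
metadata:
  name: zombie-reaping-example   # hypothetical name
spec:
  shareProcessNamespace: true
  containers:
  - name: main
    image: busybox               # placeholder image
    command: ["sleep", "3600"]
```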

## Is it possible to clean volume data by deleting the PV object?

No. The provisioner has no reliable way to tell whether a volume's PV object
was deleted by the user or simply has not been created yet: delete events are
not delivered while the provisioner is not running, and the provisioner does
not know whether the volume has already been discovered.

So the provisioner will always discover a volume as a new PV if no existing PV
is associated with it. The volume data is not cleaned in this phase and may be
leaked to other programs.

Do not delete PV objects yourself. Instead, delete the PVC objects and set
`PersistentVolumeReclaimPolicy` to `Delete` so that the volume data of the
associated PVs is cleaned.
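
For reference, a local storage class whose volumes are cleaned on release could
look like the sketch below (the class name is illustrative;
`kubernetes.io/no-provisioner` and `WaitForFirstConsumer` are the usual
settings for local volumes):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks                        # illustrative name
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete                     # released PVs are cleaned and reclaimed by the provisioner
```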

## PV is released and reclaimPolicy is Delete but not reclaimed

First, check that the provisioner is running on the node. If it is running,
check whether the volume still exists on the node. If the volume is missing,
the provisioner cannot clean the volume data and, for safety, it will not
remove the associated PV object.

It’s up to the system administrator to fix this:

- If the volume has been decommissioned, you can delete the PV object manually
  (e.g. `kubectl delete pv <pv-name>`).
- If the volume is missing because of an invalid `/etc/fstab`, a broken setup
  script, or a hardware failure, please fix the volume. If the volume can be
  recovered, the provisioner will continue to clean the volume data and reclaim
  the PV object. If the volume cannot be recovered, remove the volume (or disk)
  from the node and delete the PV object manually (see the sketch below).
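
A hedged sketch of these checks with kubectl (the namespace and label selector
are assumptions and may differ in your deployment):

```sh
# Is the provisioner running on the node that hosts the volume?
kubectl -n kube-system get pods -l app=local-volume-provisioner -o wide

# Inspect the released PV, its node affinity and local path.
kubectl get pv <pv-name> -o yaml

# If the backing volume is gone for good, remove the PV object manually.
kubectl delete pv <pv-name>
```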

## Can I change the storage class name after some volumes have been provisioned?

Basically, you can't. Once a discovery directory is configured in one storage
class, it must not be configured in another storage class; otherwise the same
volumes will be discovered again under the other storage class, and Pods
requesting PVs from different storage classes could end up mounting the same
volume. Once a directory is configured in a storage class, it is best not to
change it.

For now, we don't support migrating volumes to another storage class. If you
really need to do this, the only way is to clean all volumes under the old
storage class, configure the discovery directory under the new storage class,
and restart all provisioners (see the configuration sketch below).
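
For reference, the discovery directory is tied to a storage class in the
provisioner configuration; a migration roughly means moving the entry under the
new class name, as in this sketch (all names and paths are illustrative, and
the exact ConfigMap layout may differ between provisioner versions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config          # illustrative name
  namespace: kube-system
data:
  storageClassMap: |
    new-storage-class:                    # the storage class the directory now belongs to
      hostDir: /mnt/disks                 # discovery directory on the host
      mountDir: /mnt/disks                # where the provisioner pod sees that directory
```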
2 changes: 1 addition & 1 deletion docs/operations.md
@@ -125,7 +125,7 @@ NOTE:

- Local PVs sharing one disk filesystem will have the same capacity and will have
  no capacity isolation. If you want to separate a disk into multiple PVs with
  capacity isolation, you can [separate the disk into multiple
  partitions](#separate-disk-into-multiple-partitions) first.

### Link devices into directory to be discovered as block PVs
