
kubectl delete should fail if a non namespaced resource is about to get deleted with --all #1308


Closed
ViktorSchlaffer opened this issue Oct 20, 2022 · 3 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments


ViktorSchlaffer commented Oct 20, 2022

What happened:

An admin accidentally deleted all PersistentVolumes (PVs) with the following command:
kubectl delete pv -n <some_namespace> --all

What you expected to happen:

Based on the kubectl help text, the expected behavior would be the following:

kubectl delete pv --all -> return an error (an alternative behavior would be to delete all PVs, but in that case the help text for --all should be changed)
kubectl delete pv -n <namespace> --all -> return an error, or delete nothing, since the request is scoped to a specific namespace
kubectl delete pv --all-namespaces --all -> delete all PVs
kubectl delete pv -n <namespace> --all --all-namespaces -> delete all PVs (since --all-namespaces overrides -n, according to the help)

Reasoning:
Cluster-scoped resources are not namespaced.
The output of kubectl delete --help says the following about the --all, -A, and -n flags:

--all=false: Delete all resources, including uninitialized ones, in the namespace of the specified resource types.
-A, --all-namespaces=false: If present, list the requested object(s) across all namespaces. Namespace in current
context is ignored even if specified with --namespace.
-n, --namespace='': If present, the namespace scope for this CLI request

Therefore:
The --all flag alone should not delete cluster-scoped resources, because the help says it deletes resources in the namespace of the specified resource types, and cluster-scoped resources are not namespaced.
Likewise, -n should filter out all resources that are not in the given namespace. A cluster-scoped resource is not in any namespace, so it should be filtered out (see the check below).
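
For reference, a quick way to confirm that PersistentVolumes are cluster-scoped is to list the non-namespaced API resource types (the grep filter is just one possible way to narrow the output):

# List only cluster-scoped (non-namespaced) resource types; persistentvolumes
# appears here, which means -n has no meaning for this kind.
kubectl api-resources --namespaced=false | grep persistentvolumes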

I know that kubectl delete is a GA command and that changing its behavior is a serious matter, but my proposal does not break valid scenarios, since a user is never supposed to scope the request to a namespace when deleting cluster-scoped resources.
The proposal also applies to similar scenarios, such as the accidental deletion of namespaces described in #1056.
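
Until the behavior changes, one defensive habit (not part of the proposal, just a suggestion) is to preview what a bulk delete would remove before running it for real:

# Client-side dry run: prints the objects that would be deleted without
# deleting anything. In the incident above this would have made the
# surprising scope of the command visible (it lists every PV that would
# be deleted).
kubectl delete pv -n <namespace> --all --dry-run=client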

How to reproduce it (as minimally and precisely as possible):
Example:
kubectl delete pv -n <namespace> --all
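
A minimal reproduction sketch in a disposable test cluster could look like the following; the PV name, hostPath, and capacity are arbitrary placeholder values:

# Create a throwaway hostPath PV (placeholder values).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: repro-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/repro-pv
EOF

# PVs are cluster-scoped, so the new PV is visible regardless of namespace.
kubectl get pv

# Delete scoped to an arbitrary namespace; today this still deletes the
# cluster-scoped PV instead of failing or deleting nothing.
kubectl delete pv -n default --all

# The PV is gone even though the request was "scoped" to a namespace.
kubectl get pv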

Anything else we need to know?:

Environment:

  • Kubernetes client and server versions (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9", GitCommit:"b631974d68ac5045e076c86a5c66fba6f128dc72", GitTreeState:"clean", BuildDate:"2022-01-19T17:51:12Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9", GitCommit:"b631974d68ac5045e076c86a5c66fba6f128dc72", GitTreeState:"clean", BuildDate:"2022-01-19T17:45:53Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
@ViktorSchlaffer ViktorSchlaffer added the kind/bug Categorizes issue or PR as related to a bug. label Oct 20, 2022
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Oct 20, 2022
@k8s-ci-robot
Contributor

@ViktorSchlaffer: This issue is currently awaiting triage.

SIG CLI takes the lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


eddiezane commented Nov 9, 2022

We're hoping to address this via https://github.com/kubernetes/enhancements/tree/master/keps/sig-cli/3104-introduce-kuberc. See that KEP and this thread for the history: https://groups.google.com/g/kubernetes-dev/c/y4Q20V3dyOk.

/close

kubernetes/enhancements#3104

@k8s-ci-robot
Contributor

@eddiezane: Closing this issue.

