diff --git a/keps/sig-storage/5280-fsuser-emphemeral-volumes/README.md b/keps/sig-storage/5280-fsuser-emphemeral-volumes/README.md
new file mode 100644
index 00000000000..fcb6c110fcf
--- /dev/null
+++ b/keps/sig-storage/5280-fsuser-emphemeral-volumes/README.md
@@ -0,0 +1,948 @@
+
+# Allow `fsUser` field to be set in `PodSecurityContext` on ephemeral volumes
+
+- [Release Signoff Checklist](#release-signoff-checklist)
+- [Summary](#summary)
+- [Motivation](#motivation)
+  - [Goals](#goals)
+  - [Non-Goals](#non-goals)
+- [Proposal](#proposal)
+  - [User Stories (Optional)](#user-stories-optional)
+    - [Story 1](#story-1)
+  - [Notes/Constraints/Caveats (Optional)](#notesconstraintscaveats-optional)
+  - [Risks and Mitigations](#risks-and-mitigations)
+- [Design Details](#design-details)
+  - [Test Plan](#test-plan)
+    - [Prerequisite testing updates](#prerequisite-testing-updates)
+    - [Unit tests](#unit-tests)
+    - [Integration tests](#integration-tests)
+    - [e2e tests](#e2e-tests)
+  - [Graduation Criteria](#graduation-criteria)
+  - [Upgrade / Downgrade Strategy](#upgrade--downgrade-strategy)
+  - [Version Skew Strategy](#version-skew-strategy)
+- [Production Readiness Review Questionnaire](#production-readiness-review-questionnaire)
+  - [Feature Enablement and Rollback](#feature-enablement-and-rollback)
+  - [Rollout, Upgrade and Rollback Planning](#rollout-upgrade-and-rollback-planning)
+  - [Monitoring Requirements](#monitoring-requirements)
+  - [Dependencies](#dependencies)
+  - [Scalability](#scalability)
+  - [Troubleshooting](#troubleshooting)
+- [Implementation History](#implementation-history)
+- [Drawbacks](#drawbacks)
+- [Alternatives](#alternatives)
+- [Infrastructure Needed (Optional)](#infrastructure-needed-optional)
+
+## Release Signoff Checklist
+
+Items marked with (R) are required *prior to targeting to a milestone / release*.
+
+- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [ ] (R) KEP approvers have approved the KEP status as `implementable`
+- [ ] (R) Design details are appropriately documented
+- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
+  - [ ] e2e Tests for all Beta API Operations (endpoints)
+  - [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+  - [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [ ] (R) Graduation criteria is in place
+  - [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [ ] (R) Production readiness review completed
+- [ ] (R) Production readiness review approved
+- [ ] "Implementation History" section is up-to-date for milestone
+- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+
+[kubernetes.io]: https://kubernetes.io/
+[kubernetes/enhancements]: https://git.k8s.io/enhancements
+[kubernetes/kubernetes]: https://git.k8s.io/kubernetes
+[kubernetes/website]: https://git.k8s.io/website
+
+## Summary
+
+This KEP proposes adding a pod-level `fsUser` field to Kubernetes' `PodSecurityContext`
+to declaratively set user ownership (UID) for volumes, complementing the existing
+`fsGroup` (GID) functionality. It eliminates the need for manual ownership management
+via init containers while maintaining backward compatibility.
+
+## Motivation
+
+Kubernetes already supports controlling group ownership of volumes via the `fsGroup`
+field. However, file ownership is a two-part concept: UID and GID. Introducing `fsUser`
+would mirror `fsGroup`'s functionality, providing a symmetric and complete security context.
+
+In some scenarios, users need to set the UID of a volume to a specific value in order to
+meet application security requirements. However, Kubernetes currently lacks a straightforward
+mechanism to set the UID for volumes as part of the pod's `PodSecurityContext`.
+
+Currently, there are a few roundabout ways users attempt to set the UID of volumes:
+
+* Using `initContainers` to manually change the owner of the volume after it is mounted.
+While functional, this approach adds complexity, and [not all scenarios are compatible] with
+`initContainers`. It also works poorly when the secrets are managed by an automated tool
+and change over time.
+
+* [Copying a mounted secret] to a file with the desired UID. For example, users might copy
+a mounted secret to a new file with the correct ownership, since secrets are mounted
+read-only by default, making direct changes impossible. This workaround increases operational
+complexity and introduces potential security risks.
+
+These methods complicate the configuration of pods, making them less user-friendly and
+more error-prone; the sketch below illustrates the first workaround.
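+
+To make that complexity concrete, a typical `initContainers` workaround looks roughly like
+the following (a hypothetical sketch: the image, UID, and paths are placeholders, and this
+only works for writable volume types such as `emptyDir`, not for read-only secret mounts):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: chown-workaround
+spec:
+  volumes:
+  - name: data
+    emptyDir: {}
+  initContainers:
+  - name: fix-ownership
+    image: busybox:1.36
+    # Runs as root solely to hand the volume over to the app's UID.
+    command: ["sh", "-c", "chown -R 1000 /data"]
+    securityContext:
+      runAsUser: 0
+    volumeMounts:
+    - name: data
+      mountPath: /data
+  containers:
+  - name: app
+    image: registry.k8s.io/pause:3.9
+    securityContext:
+      runAsUser: 1000
+    volumeMounts:
+    - name: data
+      mountPath: /data
+```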
+
+By allowing users to directly set the `fsUser` value in the `PodSecurityContext`, this
+enhancement simplifies the process of managing file system ownership for Kubernetes pods,
+resulting in cleaner, more maintainable pod specifications.
+
+[Copying a mounted secret]: https://stackoverflow.com/questions/49945437/changing-default-file-owner-and-group-owner-of-kubernetes-secrets-files-mounted/50426726#50426726
+[not all scenarios are compatible]: https://github.com/kubernetes/kubernetes/issues/81089#issuecomment-611923765
+
+---
+
+Existing issues:
+
+- [Allow setting ownership on mounted secrets](https://github.com/kubernetes/kubernetes/issues/81089)
+- [implement fsUser option for securityContext](https://github.com/kubernetes/kubernetes/issues/119507)
+- [Choose permissions on secret mounted volume](https://github.com/kubernetes/kubernetes/issues/82263)
+
+### Goals
+
+* Enable UID-based ownership control for ephemeral volumes at the pod level
+* Maintain consistency with `fsGroup` behavior for permission masks and recursive ownership changes
+
+### Non-Goals
+
+* Enable UID-based ownership control for persistent volumes
+
+## Proposal
+
+**API Changes**
+
+* Add an `fsUser` int64 field to `PodSecurityContext` in the core/v1 API
+* Extend validation to ensure the UID is within the range 0 to maxInt32
+* Update the CRI API to propagate `fsUser` to the runtime
+
+**Implementation Changes**
+
+* Modify `SetVolumeOwnership` in the kubelet's volume subsystem to apply the UID via `os.Lchown`
+* Introduce `userRwMask` (0600) and `userRoMask` (0400) for permission adjustment when `fsUser` is set
+* Handle symlink ownership in `AtomicWriter` for secret/configMap volumes
+
+### User Stories (Optional)
+
+#### Story 1
+
+As a cluster user, I want to create a pod with a volume that has a specific UID,
+so that I can meet application security requirements.
+I want to do this without using `initContainers`, since I use automated tools
+for secret distribution and management, and `initContainers` are not a great
+option in that case.
+
+---
+
+For concrete examples, see [this] and [this comment] on the tracking issue.
+
+[this]: https://github.com/kubernetes/kubernetes/issues/81089#issuecomment-611923765
+[this comment]: https://github.com/kubernetes/kubernetes/issues/81089#issuecomment-585761850
+
+### Notes/Constraints/Caveats (Optional)
+
+### Risks and Mitigations
+
+Since the implementation is based on the existing `fsGroup` functionality, the risks
+match those of `fsGroup`:
+
+* **Performance impact**: recursive `chown` operations on large volumes can significantly
+slow down pod startup.
+  * Mitigation: use `fsGroupChangePolicy: OnRootMismatch` so the recursive ownership
+change is skipped when the volume root already has the expected ownership and permissions.
+
+## Design Details
+
+**API**
+
+* Add a new optional field `fsUser` to `PodSecurityContext` that allows users to
+specify the UID of the volume. Add validation to ensure that the UID is within the
+valid range (0 to maxInt32).
+
+* Update the CRI API to propagate the `fsUser` field to the container runtime.
+  * `FSUser *int64 'json:"fsUser,omitempty" protobuf:"varint,14,opt,name=fsUser"'`
+
+**VolumeOwnership**
+
+* Update the `NewVolumeOwnership` function to accept the `fsUser` field and
+initialize the `fsUser` field in the `VolumeOwnership` struct.
+* Update all methods that use `fsGroup` to also use `fsUser` where applicable.
+This includes `skipPermissionChange`, `requiresPermissionChange`, etc.
+* Update `ChangePermissions` to continue with execution when either `fsGroup` or
+`fsUser` is set.
+* Update `changeFilePermission` to pass the `fsUser` field to the `os.Lchown` function.
+Since either field may now be nil, and dereferencing a nil pointer would panic, a new
+helper function is needed to translate the `fsGroup` and `fsUser` fields into the form
+`os.Lchown` expects, where `-1` means "leave this ID unchanged":
+  ```go
+  // getFsValue converts an optional UID/GID pointer into the int form
+  // expected by os.Lchown; -1 leaves the corresponding ID unchanged.
+  func getFsValue(fsValue *int64) int {
+      if fsValue != nil {
+          return int(*fsValue)
+      }
+      return -1
+  }
+  ```
+* Add additional masks for user read/write permissions for situations where only
+the `fsUser` field is provided:
+  * `userRwMask` (0600) - for read/write permissions
+  * `userRoMask` (0400) - for read-only permissions
+  * `userExecMask` (0100) - for execute permissions
+  * the original masks are renamed to `groupRwMask` (0660) and `groupRoMask` (0440)
+* Add a function to choose the correct mask based on the combination of `fsGroup` and `fsUser`:
+  ```go
+  // determineMask picks the permission mask to apply based on which of
+  // fsUser/fsGroup are set (-1 means "not set") and whether the volume
+  // is read-only.
+  func determineMask(fsUser, fsGroup int, readonly bool) os.FileMode {
+      var mask os.FileMode
+
+      switch {
+      case fsGroup != -1:
+          mask = groupRwMask
+      case fsUser != -1:
+          mask = userRwMask
+      default:
+          // Neither is specified - this shouldn't happen due to the check
+          // at the start of ChangePermissions.
+          return 0
+      }
+
+      if readonly {
+          if fsUser != -1 && fsGroup == -1 {
+              mask = userRoMask
+          } else {
+              mask = groupRoMask
+          }
+      }
+
+      return mask
+  }
+  ```
+  The combinations are:
+
+  | `fsUser` | `fsGroup` | Read-only | Mask                 |
+  |:--------:|:---------:|:---------:|:--------------------:|
+  | not set  | set       | true      | `groupRoMask` (0440) |
+  | not set  | set       | false     | `groupRwMask` (0660) |
+  | set      | not set   | true      | `userRoMask` (0400)  |
+  | set      | not set   | false     | `userRwMask` (0600)  |
+  | set      | set       | true      | `groupRoMask` (0440) |
+  | set      | set       | false     | `groupRwMask` (0660) |
+* The `setuid` bit should be set if `fsUser` is set, mirroring the existing `setgid`
+handling for `fsGroup`. An end-to-end example of the resulting pod spec is sketched below.
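+
+For illustration, an end-to-end pod spec using the proposed field could look like the
+following. This is a hypothetical sketch: the `fsUser` field does not exist yet, and its
+final name and semantics are subject to API review.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: fsuser-demo
+spec:
+  securityContext:
+    runAsUser: 1000
+    fsUser: 1000   # proposed field: volume files become owned by UID 1000
+    fsGroup: 2000  # existing field: group ownership, unchanged by this KEP
+  containers:
+  - name: app
+    image: registry.k8s.io/pause:3.9
+    volumeMounts:
+    - name: creds
+      mountPath: /etc/creds
+      readOnly: true
+  volumes:
+  - name: creds
+    secret:
+      secretName: app-creds
+```
+
+Per the mask table above, files under `/etc/creds` would end up owned by `1000:2000`
+with mode 0440, since both fields are set and the volume is read-only.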
+
+### Test Plan
+
+- [ ] I/we understand the owners of the involved components may require updates to
+existing tests to make this code solid enough prior to committing the changes necessary
+to implement this enhancement.
+
+##### Prerequisite testing updates
+
+##### Unit tests
+
+- ``: `` - ``
+
+##### Integration tests
+
+- :
+
+##### e2e tests
+
+- :
+
+### Graduation Criteria
+
+### Upgrade / Downgrade Strategy
+
+### Version Skew Strategy
+
+## Production Readiness Review Questionnaire
+
+### Feature Enablement and Rollback
+
+###### How can this feature be enabled / disabled in a live cluster?
+
+- [x] Feature gate (also fill in values in `kep.yaml`)
+  - Feature gate name: `AllowFsUserEphemeralVolumes`
+  - Components depending on the feature gate: kube-apiserver, kubelet
+- [ ] Other
+  - Describe the mechanism:
+  - Will enabling / disabling the feature require downtime of the control
+    plane?
+  - Will enabling / disabling the feature require downtime or reprovisioning
+    of a node?
+
+###### Does enabling the feature change any default behavior?
+
+###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
+
+###### What happens if we reenable the feature if it was previously rolled back?
+
+###### Are there any tests for feature enablement/disablement?
+
+### Rollout, Upgrade and Rollback Planning
+
+###### How can a rollout or rollback fail? Can it impact already running workloads?
+
+###### What specific metrics should inform a rollback?
+
+###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
+
+###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
+ + + +### Monitoring Requirements + + + +###### How can an operator determine if the feature is in use by workloads? + + + +###### How can someone using this feature know that it is working for their instance? + + + +- [ ] Events + - Event Reason: +- [ ] API .status + - Condition name: + - Other field: +- [ ] Other (treat as last resort) + - Details: + +###### What are the reasonable SLOs (Service Level Objectives) for the enhancement? + + + +###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service? + + + +- [ ] Metrics + - Metric name: + - [Optional] Aggregation method: + - Components exposing the metric: +- [ ] Other (treat as last resort) + - Details: + +###### Are there any missing metrics that would be useful to have to improve observability of this feature? + + + +### Dependencies + + + +###### Does this feature depend on any specific services running in the cluster? + + + +### Scalability + + + +###### Will enabling / using this feature result in any new API calls? + + + +###### Will enabling / using this feature result in introducing new API types? + + + +###### Will enabling / using this feature result in any new calls to the cloud provider? + + + +###### Will enabling / using this feature result in increasing size or count of the existing API objects? + + + +###### Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs? + + + +###### Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components? + + + +###### Can enabling / using this feature result in resource exhaustion of some node resources (PIDs, sockets, inodes, etc.)? + + + +### Troubleshooting + + + +###### How does this feature react if the API server and/or etcd is unavailable? + +###### What are other known failure modes? + + + +###### What steps should be taken if SLOs are not being met to determine the problem? + +## Implementation History + + + +## Drawbacks + + + +## Alternatives + + + +## Infrastructure Needed (Optional) + + diff --git a/keps/sig-storage/5280-fsuser-emphemeral-volumes/kep.yaml b/keps/sig-storage/5280-fsuser-emphemeral-volumes/kep.yaml new file mode 100644 index 00000000000..622f61a1e24 --- /dev/null +++ b/keps/sig-storage/5280-fsuser-emphemeral-volumes/kep.yaml @@ -0,0 +1,28 @@ +title: Allow fsUser field to be set in PodSecurityContext on ephemeral volumes +kep-number: 5280 +authors: + - "@lamabro23" +owning-sig: sig-storage +participating-sigs: +status: provisional +creation-date: 2024-05-06 +reviewers: + - TBD +approvers: + - TBD +editor: TBD +see-also: +replaces: +stage: alpha +latest-milestone: "xxx" +milestone: + alpha: "xxx" + beta: "xxx" + stable: "xxx" +feature-gates: + - name: AllowFsUserEphemeralVolumes + components: + - kube-apiserver + - kubelet +disable-supported: true +metrics: