📖 Namespace separation proposal #11691

File: docs/proposals/20241127-namespace-separation.md (297 additions, 0 deletions)
---
title: Multiple Instances of The Same Provider
authors:
- "@marek-veber"
reviewers:
- "@marek-veber"
creation-date: 2024-11-27
last-updated: 2025-01-15
status: provisional
see-also:
replaces:
superseded-by:
---

# Support running multiple instances of the same provider, each one watching different namespaces
> **Review comment (Member):** If possible, let's keep the title of the PR in sync with the title of the proposal ("namespace separation" is a little bit confusing when read without context).
>
> Also wondering if the title should have something about multi-tenancy or sharding, but I'm not entirely sure at this stage.


## Table of Contents

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Summary](#summary)
- [Motivation](#motivation)
- [Goals](#goals)
- [Non-Goals/Future Work](#non-goalsfuture-work)
- [Proposal](#proposal)
- [A deployment example](#a-deployment-example)
- [Global resources:](#global-resources)
- [Namespace `capi1-system`](#namespace-capi1-system)
- [Namespace `capi2-system`](#namespace-capi2-system)
- [User Stories](#user-stories)
- [Story 1 - RedHat Hierarchical deployment using CAPI](#story-1---redhat-hierarchical-deployment-using-capi)
- [Functional Requirements](#functional-requirements)
- [FR1 - watch multiple namespaces](#fr1---watch-multiple-namespaces)
- [FR2 - watch on all namespaces excluding multiple namespaces](#fr2---watch-on-all-namespaces-excluding-multiple-namespaces)
- [Non-Functional Requirements](#non-functional-requirements)
- [Implementation Details/Notes/Constraints](#implementation-detailsnotesconstraints)
- [Current state:](#current-state)
- [Watch on multiple namespaces](#watch-on-multiple-namespaces)
- [Exclude watching on selected namespaces](#exclude-watching-on-selected-namespaces)
- [Security Model](#security-model)
- [Risks and Mitigations](#risks-and-mitigations)
- [Alternatives](#alternatives)
- [Upgrade Strategy](#upgrade-strategy)
- [Additional Details](#additional-details)
- [Test Plan](#test-plan)
- [Implementation History](#implementation-history)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

## Summary
We need to run multiple CAPI instances in one cluster and divide the namespaces to be watched among the individual instances.
> **Review comment (@serngawy, Jan 16, 2025):** Let's add the ability to run a single CAPI instance that can watch a group of namespaces OR exclude a group of namespaces from being watched.


We want and consider the following:
- each CAPI instance:
  - runs in a separate namespace and uses its own service account
  - can select, via command-line arguments, the list of namespaces:
    - to watch - e.g.: `--namespace <ns1> --namespace <ns2>`
    - to exclude from watching - e.g.: `--excluded-namespace <ns1> --excluded-namespace <ns2>`
- we are not supporting multiple versions of CAPI
- all running CAPI instances:
  - use the same container image (the same version of CAPI)
  - share global resources:
    - CRDs (`*.cluster.x-k8s.io`):
      - addons.cluster.x-k8s.io: clusterresourcesetbindings, clusterresourcesets
      - cluster.x-k8s.io: clusterclasses, clusters, machinedeployments, machinehealthchecks, machinepools, machinesets, machines
      - ipam.cluster.x-k8s.io: ipaddressclaims, ipaddresses
      - runtime.cluster.x-k8s.io: extensionconfigs
      - NOTE: the CRDs' conversion webhooks point to the first instance only
    - the `ClusterRole/capi-aggregated-manager-role`
    - the `ClusterRoleBinding/capi-manager-rolebinding` binding all service accounts of the CAPI instances (e.g. `capi1-system:capi-manager`, ..., `capiN-system:capi-manager`) to the `ClusterRole`
> **Review comment (Member), on lines +54 to +72:** It might just be me, but I struggle a little bit to understand whether this list is a problem statement, details of a solution, or some constraint we have or that is introduced by this proposal.
>
> Frankly speaking, I would suggest keeping only a short sentence about the solution in the summary, and moving those details down into the proposal, where we will possibly have more context to explain the reasoning behind some decisions.


References:
* https://cluster-api.sigs.k8s.io/developer/core/support-multiple-instances

The proposed PRs implementing this namespace separation:
* https://github.com/kubernetes-sigs/cluster-api/pull/11397 extends the command-line option to `--namespace=<ns1, …>`
* https://github.com/kubernetes-sigs/cluster-api/pull/11370 adds the new command-line option `--excluded-namespace=<ns1, …>`

## Motivation
Our motivation is to have a provisioning cluster that is itself a provisioned cluster at the same time, as part of a hierarchical structure of clusters.
> **Review comment (@serngawy, Jan 16, 2025):** Suggested motivation wording:
>
> For a multi-tenant environment, a cluster is used as a provisioner using different CAPI providers.
> Using CAPI requires careful consideration of namespace isolation to maintain security and operational boundaries between tenants. In such setups, it is essential to configure the CAPI controller instances to either watch or exclude specific groups of namespaces based on the isolation requirements. This can be achieved by setting up namespace-scoped controllers or applying filters, such as label selectors, to define the namespaces each instance should monitor. By doing so, administrators can ensure that the activities of one tenant do not interfere with others, while also reducing the resource overhead by limiting the scope of CAPI operations. This approach enhances scalability, security, and manageability, making it well-suited for environments with strict multi-tenancy requirements.

> **Review comment (@fabriziopandini, Jan 17, 2025):** Maybe we need a glossary section if we are introducing new terms like provisioning cluster, provisioned cluster, hierarchical structure of clusters.


Two namespaces are used by the management cluster itself, and the remaining namespaces are watched by the CAPI manager to manage other managed clusters.
> **Review comment (Contributor):** I think these sentences could be a little clearer; I'm not fully understanding the motivation.

> **Review comment (Member):** +1.
> TBH this seems more a description of a solution (use a namespace as a tenancy boundary), with some assumptions about how organizations should use this feature (two namespaces, why two?).
>
> Let's please expand on the problem we are trying to solve in this paragraph, so we can then properly understand the solution down below.


This enhancement has also been requested repeatedly by the CAPI community:
> **Review comment (Member):** Those are two issues from the same person; we can quote them as a reference, but I struggle a little bit to find them representative of other members' needs (one issue is stale, the other has just a few comments).
>
> FYI I'm aware of other persons occasionally asking about this, but ultimately no one stepped up to tackle this point before now.
>
> Also, please note that our controllers today have a --watch-filter flag, as well as issues like #7775, which are the expression of a different need/requirement about how sharding of CAPI clusters should work.
>
> Ideally this work should address both requirements, or at least it should be documented how this proposal does not prevent making progress on the other approach if and when someone steps up to work on it.


* https://github.com/kubernetes-sigs/cluster-api/issues/11192
* https://github.com/kubernetes-sigs/cluster-api/issues/11193

### Goals
We need to extend the existing feature that limits watching to a single specified namespace.
> **Review comment (Contributor):** Just to be clear, is this extending the existing feature for supporting multiple instances of cluster-api?

> **Review comment (@nrb, Jan 16, 2025):** I think it's extending the existing --namespace option to allow more than 1 namespace to be watched, like L95 of this document.


We need to run multiple CAPI controller instances:
- each watching only specified namespaces: `capi1-system`, …, `capi$(N-1)-system`
- and a last-resort instance watching the remaining namespaces, i.e. excluding the namespaces already watched by the previously mentioned instances
> **Review comment (Contributor):** Is there prior art on cluster-api controllers watching multiple namespaces? (just curious)

> **Review comment (Contributor):** https://github.com/kubernetes-sigs/cluster-api/blob/main/main.go#L170-L171 shows that we can take in a single namespace, which eventually gets specified as a controller-runtime cache.Option.DefaultNamespaces option (https://github.com/kubernetes-sigs/cluster-api/blob/main/main.go#L346).
>
> The documentation for that field (https://github.com/kubernetes-sigs/controller-runtime/blob/main/pkg/cache/cache.go#L182-L193) implies to me that multiple namespaces are supported, but I'm not familiar with the implementation, and I know that somewhat recently @sbueringer and @fabriziopandini put a lot of effort into making sure caching was optimized.
>
> As I understand things right now, CAPI's controllers will watch either 1 namespace or all namespaces. Watching a number of namespaces between those two extremes is either not supported or not well understood right now, based on my interpretation of what was said in the community meeting.

> **Review comment (Contributor):** > Watching a number of namespaces between those two extremes is either not supported or not well understood right now, based on my interpretation of what was said in the community meeting.
>
> Ack, thanks @nrb, this is what I was wondering.


This change is only a small and straightforward update of the existing feature that limits watching to a specified namespace via the command-line option `--namespace <ns>`.

> **Review comment (Contributor):** Let's clarify that the goal is to let a CAPI instance watch/exclude a group of namespaces, which will enhance scalability, security, and manageability in a multi-tenant environment.


### Non-Goals/Future Work
Non-goals:
* it's not necessary to work with different versions of CRDs; we assume:
  * the same version of CAPI (the same container image) is used by all instances
  * the same CRDs are shared by all instances
* the contract and RBAC need to be solved per specific provider (AWS, Azure, ...)


## Proposal
We are proposing to:
* enable selecting multiple namespaces: extend `--namespace=<ns>` to `--namespace=<ns1, …>` to watch the selected namespaces
  * the code change only extends an existing map holding a single item to multiple items
  * the maintenance complexity shouldn't grow here
* add the new command-line option `--excluded-namespace=<ens1, …>` to define the list of excluded namespaces (see the flag sketch after this list)
  * the code change only sets the option `Cache.Options.DefaultFieldSelector` to exclude objects whose namespace matches any of the specified names
  * the maintenance complexity shouldn't grow much here
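
For illustration only, the sketch below shows how the two repeatable flags could be declared with `spf13/pflag`'s `StringArrayVar`. The variable names `watchNamespacesList` and `watchExcludedNamespaces` follow the referenced PRs; the function name `initFlags` and the rest of the wiring are assumptions, not the actual CAPI `main.go`.

```go
package main

import (
	flag "github.com/spf13/pflag"
)

// Assumed sketch: repeatable --namespace / --excluded-namespace flags.
var (
	watchNamespacesList     []string
	watchExcludedNamespaces []string
)

func initFlags(fs *flag.FlagSet) {
	// Each flag may be repeated, e.g. --namespace rosa-a --namespace rosa-b.
	fs.StringArrayVar(&watchNamespacesList, "namespace", nil,
		"Namespace(s) that the manager watches; repeat the flag to watch several namespaces.")
	fs.StringArrayVar(&watchExcludedNamespaces, "excluded-namespace", nil,
		"Namespace(s) excluded from watching; repeat the flag to exclude several namespaces.")
}

func main() {
	initFlags(flag.CommandLine)
	flag.Parse()
}
```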

### A deployment example
Let's consider an example of how to deploy multiple instances:

#### Global resources:
* CRDs (`*.cluster.x-k8s.io`) - the conversion webhooks will point to the first instance, e.g.:
```yaml
spec:
  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        service:
          name: capi-webhook-service
          namespace: capi1-system
          path: /convert
```
* `ClusterRole/capi-aggregated-manager-role`
* the `ClusterRoleBinding/capi-manager-rolebinding` binding both service accounts `capi1-system:capi-manager` and `capi2-system:capi-manager` to the cluster role
```yaml
subjects:
- kind: ServiceAccount
  name: capi-manager
  namespace: capi2-system
- kind: ServiceAccount
  name: capi-manager
  namespace: capi1-system
```

#### Namespace `capi1-system`
* `ServiceAccount/capi-manager`
* `Role/capi-leader-election-role`, `RoleBinding/capi-leader-election-rolebinding`
* `MutatingWebhookConfiguration/capi-mutating-webhook-configuration`, `ValidatingWebhookConfiguration/capi-validating-webhook-configuration`
* `Service/capi-webhook-service`
* `Deployment/capi-controller-manager` with the extra args:
```yaml
containers:
- args:
  - --namespace=rosa-a
  - --namespace=rosa-b
```

#### Namespace `capi2-system`
* `ServiceAccount/capi-manager`
* `Role/capi-leader-election-role`, `RoleBinding/capi-leader-election-rolebinding`
* `MutatingWebhookConfiguration/capi-mutating-webhook-configuration`, `ValidatingWebhookConfiguration/capi-validating-webhook-configuration`
* `Service/capi-webhook-service`
* `Deployment/capi-controller-manager` with the extra args:
```yaml
containers:
- args:
  - --excluded-namespace=rosa-a
  - --excluded-namespace=rosa-b
```

### User Stories
We need to deploy two CAPI instances in the same cluster and divide the list of namespaces: some well-known namespaces are assigned to be watched by the first instance, and the rest are assigned to the second instance.
> **Review comment (Contributor):** This user story helps me understand the motivation a little better; it might be nice to have some of this language in that section too.


#### Story 1 - RedHat Hierarchical deployment using CAPI
> **Review comment (Contributor):** I don't think this is specific to Red Hat; it seems anyone who is doing this type of hierarchical deployment could benefit from this enhancement.
>
> I do think it's nice to include the links to the Red Hat Jiras.


A provisioning cluster that is also a provisioned cluster at the same time, as part of a hierarchical structure of clusters.
Two namespaces are used by the management cluster itself, and the remaining namespaces are watched by the CAPI manager to manage other managed clusters.

RedHat Jira Issues:
* [ACM-15441](https://issues.redhat.com/browse/ACM-15441) - CAPI required enabling option for watching multiple namespaces,
* [ACM-14973](https://issues.redhat.com/browse/ACM-14973) - CAPI controllers should enabling option to ignore namespaces


#### Functional Requirements

##### FR1 - watch multiple namespaces
* Currently it's possible to select a single namespace to watch using `--namespace <ns>`.
* We need to be able to select multiple namespaces using `--namespace <ns1>` ... `--namespace <nsN>` to watch all of them.

##### FR2 - watch on all namespaces excluding multiple namespaces
* We need to select excluded namespaces using `--excluded-namespace <ens1>` ... `--excluded-namespace <ensN>` to watch all namespaces except the excluded ones.

#### Non-Functional Requirements

We consider that all CAPI instances:
- are using the same container image
- are sharing CRDs and ClusterRole

### Implementation Details/Notes/Constraints

There is a map `watchNamespaces` of `cache.Config{}` objects [here](https://github.com/kubernetes-sigs/cluster-api/pull/11397/files#diff-2873f79a86c0d8b3335cd7731b0ecf7dd4301eb19a82ef7a1cba7589b5252261L319-R319):
```go
var watchNamespaces map[string]cache.Config
// ...
ctrlOptions := ctrl.Options{
	// ...
	Cache: cache.Options{
		DefaultNamespaces: watchNamespaces,
		// ...
	},
}
```

#### Current state:
* when `watchNamespaces` == `nil` then all namespaces are watched
* when `watchNamespaces` == `map[string]cache.Config{ watchNamespace: {} }` then only namespace specified by `watchNamespace` is watched
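
The two cases above correspond, roughly, to the following sketch of the existing single-namespace logic (variable names as in the current `main.go`; surrounding code elided, so this is an illustration rather than the verbatim implementation):

```go
// Current behaviour (sketch): a single --namespace value becomes exactly one
// entry in the controller-runtime cache configuration.
var watchNamespaces map[string]cache.Config
if watchNamespace != "" {
	watchNamespaces = map[string]cache.Config{
		watchNamespace: {},
	}
}
// watchNamespaces == nil (no --namespace flag) keeps the cache watching all namespaces.
```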

#### Watch on multiple namespaces
We are suggesting to [update](https://github.com/kubernetes-sigs/cluster-api/pull/11397/files#diff-2873f79a86c0d8b3335cd7731b0ecf7dd4301eb19a82ef7a1cba7589b5252261R320-R323) the single `watchNamespace` value to a `watchNamespacesList` list:
```go
if watchNamespacesList != nil {
	watchNamespaces = map[string]cache.Config{}
	for _, watchNamespace := range watchNamespacesList {
		watchNamespaces[watchNamespace] = cache.Config{}
	}
}
```
Then the namespaces contained in `watchNamespacesList` will be watched by the instance, as the worked example below illustrates.
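
For example (illustrative values matching the deployment example above), `--namespace rosa-a --namespace rosa-b` would produce:

```go
// Illustrative result of the loop above for: --namespace rosa-a --namespace rosa-b
watchNamespaces := map[string]cache.Config{
	"rosa-a": {},
	"rosa-b": {},
}
// Passing this map as cache.Options.DefaultNamespaces restricts the manager's
// informers, and therefore reconciliation, to the rosa-a and rosa-b namespaces.
```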

#### Exclude watching on selected namespaces
1. We are suggesting to [create a fieldSelector condition](https://github.com/kubernetes-sigs/cluster-api/pull/11370/files#diff-2873f79a86c0d8b3335cd7731b0ecf7dd4301eb19a82ef7a1cba7589b5252261R331-R339) matching all namespaces except those listed in `watchExcludedNamespaces`:
```go
var fieldSelector fields.Selector
if watchExcludedNamespaces != nil {
	var conditions []fields.Selector
	for i := range watchExcludedNamespaces {
		conditions = append(conditions, fields.OneTermNotEqualSelector("metadata.namespace", watchExcludedNamespaces[i]))
	}
	fieldSelector = fields.AndSelectors(conditions...)
}
```
2. Then we can use the `fieldSelector` to set the `DefaultFieldSelector` value:
```go
ctrlOptions := ctrl.Options{
	// ...
	Cache: cache.Options{
		DefaultFieldSelector: fieldSelector,
		// ...
	},
}
```
and then the namespaces contained in `watchExcludedNamespaces` will be excluded from watching by the instance; a short sanity-check sketch of the selector behaviour follows.
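
As a quick sanity check (a standalone sketch, not part of the proposed change), the composed selector can be evaluated directly against namespace values using the same `k8s.io/apimachinery/pkg/fields` helpers:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// Build the same kind of selector as above for two excluded namespaces.
	watchExcludedNamespaces := []string{"rosa-a", "rosa-b"}

	var conditions []fields.Selector
	for _, ns := range watchExcludedNamespaces {
		conditions = append(conditions, fields.OneTermNotEqualSelector("metadata.namespace", ns))
	}
	fieldSelector := fields.AndSelectors(conditions...)

	// Objects in an excluded namespace do not match the selector ...
	fmt.Println(fieldSelector.Matches(fields.Set{"metadata.namespace": "rosa-a"})) // false
	// ... while objects in any other namespace do, so they stay in the cache.
	fmt.Println(fieldSelector.Matches(fields.Set{"metadata.namespace": "watch1"})) // true
}
```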


### Security Model

A service account will be created in each namespace hosting a CAPI instance.
In the simple deployment example we consider that all CAPI instances share the single cluster role `capi-aggregated-manager-role`, so all CAPI service accounts are bound to it using the cluster role binding `capi-manager-rolebinding`.
We can also use multiple cluster roles and grant access more granularly, only to the namespaces watched by each instance.

> **Review comment (Contributor):** Will all the controllers use the same cloud credentials, or will each namespace have its own cloud credentials?


### Risks and Mitigations


- What are the risks of this proposal and how do we mitigate? Think broadly.
- How will UX be reviewed and by whom?
- How will security be reviewed and by whom?
- Consider including folks that also work outside the SIG or subproject.

## Alternatives

The `Alternatives` section is used to highlight and record other possible approaches to delivering the value proposed by a proposal.

## Upgrade Strategy

We don't expect any changes while upgrading.

## Additional Details

### Test Plan

Expectations:
* create only E2E test cases using kind
* deploy two instances into the `capi1-system` and `capi2-system` namespaces
* create three namespaces `watch1`, `watch2`, `watch3` to be watched
* test correct watching:
  * watch on all namespaces (without `--namespace` and `--excluded-namespace`)
  * watch on namespaces `watch1`, `watch2` (only events in `watch1` and `watch2` are accepted, no events from `watch3`):
    * using `--namespace watch1` and `--namespace watch2`
    * using `--excluded-namespace watch3`


## Implementation History

<!--
- [ ] MM/DD/YYYY: Proposed idea in an issue or [community meeting]
- [ ] MM/DD/YYYY: Compile a Google Doc following the CAEP template (link here)
- [ ] MM/DD/YYYY: First round of feedback from community
- [ ] MM/DD/YYYY: Present proposal at a [community meeting]
- [ ] MM/DD/YYYY: Open proposal PR

[community meeting](https://www.youtube.com/watch?v=COcM5bpbusI)
-->