
📖 Namespace separation proposal #11691

Open · wants to merge 2 commits into base: main

Conversation

marek-veber

What type of PR is this?:

/kind documentation

What this PR does / why we need it:

This PR serves as a starting point for the discussion about running multiple instances and defining which installation will watch which namespace.

See issues:

@k8s-ci-robot k8s-ci-robot added kind/documentation Categorizes issue or PR as related to documentation. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-area PR is missing an area label labels Jan 15, 2025
@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Jan 15, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign enxebre for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Hi @marek-veber. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@elmiko left a comment

thanks @marek-veber, the design makes sense to me, i have some questions about various details.

a provisioning cluster which is provisioned cluster at the same time

also, i'm finding this phrase to be slightly confusing, is there a clearer way to state it?


## Motivation
Our motivation is to have a provisioning cluster which is provisioned cluster at the same time while using hierarchical structure of clusters.
Two namespaces are used by management cluster and the rest of namespaces are watched by CAPI manager to manage other managed clusters.

i think these sentences could be a little clearer, i'm not fully understanding the motivation.

* https://github.com/kubernetes-sigs/cluster-api/issues/11193

### Goals
We need to extend the existing feature to limit watching on specified namespace.

just to be clear this is extending the existing feature for supporting multiple instances of cluster-api?

@nrb Jan 16, 2025

I think it's extending the existing --namespace option to allow more than 1 namespace to be watched, like L95 of this document.

We need to extend the existing feature to limit watching on specified namespace.
We need to run multiple CAPI controller instances:
- each watching only specified namespaces: `capi1-system`, …, `capi$(N-1)-system`
- and the last resort instance to watch the rest of namespaces excluding the namespaces already watched by previously mentioned instances

is there prior art on cluster-api controllers watching multiple namespaces? (just curious)


https://github.com/kubernetes-sigs/cluster-api/blob/main/main.go#L170-L171 shows that we can take in a single namespace, which eventually gets specified as a controller-runtime cache.Option.DefaultNamespaces option (https://github.com/kubernetes-sigs/cluster-api/blob/main/main.go#L346).

The documentation for that field (https://github.com/kubernetes-sigs/controller-runtime/blob/main/pkg/cache/cache.go#L182-L193) implies to me that multiple namespaces are supported, but I'm not familiar with the implementation, and I know that somewhat recently @sbueringer and @fabriziopandini put a lot of effort into making sure caching was optimized.

As I understand things right now, CAPI's controllers will watch either 1 namespace or all namespaces. Watching a number of namespaces between those two extremes is either not supported or not well understood right now, based on my interpretation of what was said in the community meeting.
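The selection logic nrb describes — a `--namespace` flag value becoming the namespace→config map that controller-runtime's `cache.Options.DefaultNamespaces` accepts — could be sketched in pure Go. This is an illustrative sketch with no controller-runtime dependency; `buildWatchSet` and `shouldWatch` are made-up names, not CAPI code:

```go
package main

import (
	"fmt"
	"strings"
)

// buildWatchSet mimics turning a comma-separated --namespace flag value
// into a namespace set, analogous to the keys of the map that
// controller-runtime's cache.Options.DefaultNamespaces expects
// (the real map's values are per-namespace cache configs).
func buildWatchSet(flagValue string) map[string]struct{} {
	set := map[string]struct{}{}
	if flagValue == "" {
		return set // empty set: no restriction, watch all namespaces
	}
	for _, ns := range strings.Split(flagValue, ",") {
		set[strings.TrimSpace(ns)] = struct{}{}
	}
	return set
}

// shouldWatch reports whether an object in namespace ns would be cached
// under the given restriction.
func shouldWatch(set map[string]struct{}, ns string) bool {
	if len(set) == 0 {
		return true
	}
	_, ok := set[ns]
	return ok
}

func main() {
	set := buildWatchSet("capi1-system,capi2-system")
	fmt.Println(shouldWatch(set, "capi1-system")) // true
	fmt.Println(shouldWatch(set, "default"))      // false
}
```

Extending the existing single-namespace flag to several namespaces is, at this level, just a map with more keys; the open question in the thread is how well the optimized caching handles that middle ground.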


### User Stories
We need to deploy two CAPI instances in the same cluster and divide the list of namespaces, assigning some well-known namespaces to be watched by the first instance and the rest to the second instance.

this user story helps me to understand the motivation a little better, it might be nice to have some of this language in that section too.

### User Stories
We need to deploy two CAPI instances in the same cluster and divide the list of namespaces, assigning some well-known namespaces to be watched by the first instance and the rest to the second instance.

#### Story 1 - RedHat Hierarchical deployment using CAPI

i don't think this is specific to Red Hat, it seems anyone who is doing this type of hierarchical deployment could benefit from this enhancement.

i do think it's nice to include the links to the Red Hat jiras.

A service account will be created for each namespace with CAPI instance.
In the simple deployment example we are considering that all CAPI instances will share one cluster role `capi-aggregated-manager-role`, so all CAPI service accounts will be bound using the cluster role binding `capi-manager-rolebinding`.
We can also use multiple cluster roles and grant access more granularly, only to the namespaces watched by each instance.
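As a rough sketch of the shared-role wiring described above (the role and binding names are taken from the text; the service-account name and namespaces are illustrative assumptions), the aggregated cluster role would be bound to each instance's service account:

```yaml
# Illustrative sketch: one shared cluster role bound to every CAPI
# instance's service account. For more granular access, namespace-scoped
# Roles/RoleBindings per instance could be used instead.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: capi-manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: capi-aggregated-manager-role
subjects:
- kind: ServiceAccount
  name: capi-manager        # hypothetical service-account name
  namespace: capi1-system   # hypothetical instance namespace
- kind: ServiceAccount
  name: capi-manager
  namespace: capi2-system
```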


will all the controllers use the same cloud credentials or will each namespace have its own cloud credentials?

@enxebre
Member

enxebre commented Jan 16, 2025

I think overall the motivation behind this proposal should be something like "Enable adoption in advanced multi-tenant scenarios".

Use case:
As a Service Provider/Consumer I own a management cluster that allocates and manages the lifecycle of Kubernetes clusters powered by CAPI using at least 2 different paradigms.
Paradigm 1:
Each cluster of type 1 runs its own suite of capi controllers targeting a particular namespace as a hidden implementation engine. Don't use webhooks. Motivations:

  • Granular versioning lifecycling
  • Granular Logging and forwarding mechanism
  • Granular metrics
  • Granular resource consumption budget
  • Security requirements
    -- Per cluster Network policies isolation
    -- Per cluster Cloud provider creds isolation
    -- Per cluster Kubeconfig access isolation

Paradigm 2:
Each cluster of type 2 is managed by a common centralized suite of CAPI controllers with different requirements/constraints to the listed above.

For both paradigms to coexist, paradigm 2 wants a way to restrict its scope and not be aware of CAPI resources owned by paradigm1.

cc @fabriziopandini @sbueringer @serngawy I'm catching up with yesterday's community call so wanted to share my thoughts on the topic. I agree with the concerns raised on the call. Things that I think could be explored:

- Use watchFilterValue
- Use RBAC
- Add some flexibility to let the manager cache filter out namespaces


## Summary
We need to run multiple CAPI instances in one cluster and divide the namespaces to be watched by given instances.
@serngawy Jan 16, 2025

let's add the ability to run a single CAPI instance that can watch a group of namespaces OR exclude a group of namespaces from being watched.

* https://github.com/kubernetes-sigs/cluster-api/pull/11370 adds the new command-line option `--excluded-namespace=<ns1, …>`
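The inverse predicate — watch everything except a listed set — is straightforward; how it reaches the cache layer is the open design question. The sketch below is pure Go with invented names (`excluded`, `exclusionFieldSelector`); the field-selector form is one way a cache-level implementation *could* filter excluded namespaces server-side, not necessarily what PR #11370 does:

```go
package main

import (
	"fmt"
	"strings"
)

// excluded reports whether ns is in the excluded set: the instance
// watches every namespace except these.
func excluded(excludeList []string, ns string) bool {
	for _, e := range excludeList {
		if e == ns {
			return true
		}
	}
	return false
}

// exclusionFieldSelector builds a Kubernetes field selector of the form
// "metadata.namespace!=a,metadata.namespace!=b" — one possible way to
// push the exclusion down to the API server (an assumption, for
// illustration only).
func exclusionFieldSelector(excludeList []string) string {
	parts := make([]string, 0, len(excludeList))
	for _, ns := range excludeList {
		parts = append(parts, "metadata.namespace!="+ns)
	}
	return strings.Join(parts, ",")
}

func main() {
	ex := []string{"capi1-system", "capi2-system"}
	fmt.Println(excluded(ex, "capi1-system")) // true
	fmt.Println(excluded(ex, "default"))      // false
	fmt.Println(exclusionFieldSelector(ex))
}
```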

## Motivation
Our motivation is to have a provisioning cluster which is provisioned cluster at the same time while using hierarchical structure of clusters.
@serngawy Jan 16, 2025

Motivation:

In a multi-tenant environment, a cluster is used as a provisioner using different CAPI providers.
Using CAPI requires careful consideration of namespace isolation to maintain security and operational boundaries between tenants. In such setups, it is essential to configure the CAPI controller instances to either watch or exclude specific groups of namespaces based on the isolation requirements. This can be achieved by setting up namespace-scoped controllers or applying filters, such as label selectors, to define the namespaces each instance should monitor. By doing so, administrators can ensure that the activities of one tenant do not interfere with others, while also reducing the resource overhead by limiting the scope of CAPI operations. This approach enhances scalability, security, and manageability, making it well-suited for environments with strict multi-tenancy requirements.

- and the last resort instance to watch the rest of namespaces excluding the namespaces already watched by previously mentioned instances

This change is only a small and straightforward update of the existing feature that limits watching to a specified namespace via the command-line option `--namespace <ns>`.


Let's clarify that the goal is to let a CAPI instance watch or exclude a group of namespaces, which will enhance scalability, security, and manageability in a multi-tenant environment.

@nrb
Contributor

nrb commented Jan 16, 2025

/area provider/core

@k8s-ci-robot k8s-ci-robot added area/provider/core Issues or PRs related to the core provider and removed do-not-merge/needs-area PR is missing an area label labels Jan 16, 2025
@nrb
Contributor

nrb commented Jan 16, 2025

I like @enxebre's explanation of paradigms.

As I've understood the goals, another way of stating 2 paradigms could be a CAPI install for self-managing the cluster (paradigm 1) and a CAPI install managing everything else that users may create (paradigm 2).

In this scenario, the self-managing CAPI install wants to watch exactly 1 namespace (let's call it ns/internal-capi), and I think that's covered with the --namespace option today.

The CAPI installation for everything else, then, would like to watch all namespaces except ns/internal-capi. I don't think this option is supported today, and would be served by implementing #11193 in some fashion.
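Concretely, the two-instance split nrb describes could look like the following flag sets. This is a sketch: `internal-capi` is nrb's example namespace, `--namespace` exists today, and `--excluded-namespace` is the option proposed in PR #11370, so its final name and syntax may differ:

```shell
# Paradigm 1: self-managing instance, pinned to exactly one namespace
# (supported today via --namespace).
instance1_args="--namespace=internal-capi"

# Paradigm 2: instance for everything else, skipping the namespace above
# (--excluded-namespace is the flag proposed in PR #11370, not yet merged).
instance2_args="--excluded-namespace=internal-capi"

echo "instance1: $instance1_args"
echo "instance2: $instance2_args"
```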
