Pods require extended privileges but still use the default service account #5
Comments
Thanks for bringing this to my attention. To my knowledge this is the first time anyone has used anything other than a local "kind" cluster for KNE (we certainly haven't been doing this). The intended model for KNE is that the cluster itself is disposable and only runs the network OS and traffic generator containers. From what I can gather, "security context constraints" are an OpenShift-specific construct, and other cloud providers have roughly similar offerings for controlling privileges on a per-pod or per-service-account basis. We aren't interested in integrating any technology specific to a given cloud, so there is no possibility the operator will transparently configure these constraints. If I'm understanding correctly, what you're asking is for the operator to create its pods under a dedicated, well-known service account rather than the default one. It would still be your responsibility to make sure the cluster is configured to allow that service account to create privileged containers, but exactly how that's done wouldn't be tied to the operator.
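For illustration only, roughly what that cluster-side configuration might look like on OpenShift. The role and binding names, the privileged SCC, and the ceos-lab service account are all placeholders here, not something the operator would create or manage:

```yaml
# Hypothetical ClusterRole that permits "use" of the privileged SCC (names are examples).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-privileged-scc
rules:
- apiGroups: ["security.openshift.io"]
  resources: ["securitycontextconstraints"]
  resourceNames: ["privileged"]
  verbs: ["use"]
---
# Bind it only to the dedicated service account in the topology namespace,
# so no other workload picks up the extra privileges.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ceos-lab-use-privileged-scc
  namespace: ceosnms
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-privileged-scc
subjects:
- kind: ServiceAccount
  name: ceos-lab
  namespace: ceosnms
```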
@burnyd has tried running KNE on an actual k8s cluster, and this is where I originally got the idea from. While emulating a topology on kind certainly works for most developers, my idea was to set up CI workflows to pre-validate hardware tests in a central instance. Also, in my experience the system requirements of a larger topology usually exceed the specs of a local machine. I understand that SCCs are an OpenShift-specific construct and that being vendor agnostic is the right way to go. Anyhow, what I would need is the following:

```
➜ ~ kubectl get pods -n ceosnms dc1-spine1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-11-01T14:52:57Z"
  labels:
    app: dc1-spine1
    topo: ceosnms
  name: dc1-spine1
  namespace: ceosnms
  ownerReferences:
  - apiVersion: ceoslab.arista.com/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: CEosLabDevice
    name: dc1-spine1
    uid: e22c2913-7d26-48ca-ad33-da0c071b7a98
  resourceVersion: "2869042"
  uid: 2135a16b-8bcf-4906-919c-996545b9bebb
spec:
  <truncated>
  securityContext: {}
  serviceAccount: default
```

Change the default service account for this kind of pod to something else that is well known in advance, let's say ceos-lab. This would allow the cluster ops team to patch RBAC for ceos-lab individually without impacting the privileges of other workloads in the namespace. Maybe this is even a feature for KNE itself, where the name of the SA that should be used can be configured.
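Purely as a sketch of what I mean (ceos-lab is just an example name, nothing the operator provides today), the operator would create the service account alongside the topology namespace and reference it from the pods it spawns:

```yaml
# Example dedicated service account, created per topology namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ceos-lab
  namespace: ceosnms
---
# The operator-created pods would then reference it instead of "default".
apiVersion: v1
kind: Pod
metadata:
  name: dc1-spine1
  namespace: ceosnms
spec:
  serviceAccountName: ceos-lab
  # ...rest of the pod spec as generated by the operator today
```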
Just a note: KNE definitely wants to support cluster types other than just kind. We have been working internally to use KNE on a multi-node cluster solution (kubeadm). Currently in the deployment config you can specify
This issue is closely related to open-traffic-generator/keng-operator#18. I assume that a similar issue occurs for other vendors as well, and it might be worth discussing a more generic solution that allows users to run KNE with different flavors of Kubernetes.
When deploying a topology with Arista cEOS nodes to a cluster, a pod is created for each virtual instance. Since these pods do not use a specific service account, on OpenShift they run with only minimal privileges. This shows up as errors in the controller manager logs; for more details please check the attached log file.
arista-ceoslab-operator-controller-manager-5ff748b8db-x6wmn-manager.log
This seems to be fixable by extending the privileges of the default service account, as shown below, but in general this is not a recommended practice, since other pods that do not specify a different service account will also inherit these privileges.
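For reference, a minimal sketch of that workaround on OpenShift, assuming the privileged SCC is what the cEOS containers need (role and binding names are placeholders):

```yaml
# ClusterRole permitting "use" of the privileged SCC (example names).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-privileged-scc
rules:
- apiGroups: ["security.openshift.io"]
  resources: ["securitycontextconstraints"]
  resourceNames: ["privileged"]
  verbs: ["use"]
---
# Not recommended: binding this to "default" means every pod in the namespace
# that does not set its own service account inherits the extra privileges.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-use-privileged-scc
  namespace: ceosnms
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-privileged-scc
subjects:
- kind: ServiceAccount
  name: default
  namespace: ceosnms
```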
A better solution would be to use a dedicated service account for pods created by the controller, so that extending privileges is limited to a specific set of applications running in this namespace.