We are currently happy kuberos users in two on-premises k8s clusters, dev.k8s and prod1.k8s. Both tie into dex, which authenticates against our corporate Active Directory. When kuberos renders the username in a kubecfg.yaml, it uses the user's mail attribute from AD as the name. When a user downloads config files from both the dev and prod1 clusters, the username is the same but the credentials are different, so authentication fails against one of the two clusters.
This RFE is asking for a way to template the username in the rendered config instead of the current fixed format:
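For illustration only (the address, issuer URL, and token values below are placeholders, and the exact set of keys kuberos emits may differ), both clusters currently render a user entry along these lines:

```yaml
users:
- name: jschroeder@example.com        # mail attribute from AD, identical in both clusters
  user:
    auth-provider:
      name: oidc
      config:
        client-id: kubernetes
        id-token: <id-token-from-dex>
        idp-issuer-url: https://dex.example.com
        refresh-token: <refresh-token>
```

Since kubectl merges kubeconfig files by entry name and the first file to define a name wins, one cluster's credentials end up shadowing the other's.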
Perhaps the username@domain value could take a configurable prefix or suffix to denote the cluster? Then the rendered config could have something like the following in one cluster, while another cluster keeps the existing behavior:
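As a sketch (the suffix scheme and names here are hypothetical, not an existing kuberos option), the prod1 config might render as:

```yaml
# prod1.k8s kubecfg.yaml (hypothetical templated output)
contexts:
- name: prod1.k8s
  context:
    cluster: prod1.k8s
    user: jschroeder@example.com/prod1.k8s
users:
- name: jschroeder@example.com/prod1.k8s   # suffix keeps the entry unique when files are merged
  user:
    auth-provider:
      name: oidc
      config:
        id-token: <prod1-id-token>
        idp-issuer-url: https://dex.example.com
        refresh-token: <prod1-refresh-token>
```

while dev.k8s keeps the plain `jschroeder@example.com` name, so the merged view no longer has two conflicting credentials under one user name.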
We simply set the `KUBECONFIG` variable to include both files, e.g. `/home/jschroeder/.kube/config:/home/jschroeder/.kube/prod.k8s.yaml:/home/jschroeder/.kube/dev.k8s.yaml`.