Creating a cluster with Cilium CNI doesn't work #676

@radekg

Description

I'm using the following configuration file:

---
kubeconfig_path: "~/.hetzner/kube/gmbh"
cluster_name: gmbh-infrastructure
k3s_version: v1.30.3+k3s1

networking:
  ssh:
    port: 22
    use_agent: false
    public_key_path: "~/.hetzner/ssh/gmbh_rsa.pub"
    private_key_path: "~/.hetzner/ssh/gmbh_rsa"
  allowed_networks:
    ssh: 
    - 0.0.0.0/0
    api:
    - 0.0.0.0/0
  public_network:
    ipv4: true
    ipv6: false
  private_network:
    enabled: true
    subnet: 10.0.0.0/16
  cni:
    enabled: true
    encryption: false
    mode: cilium
    cilium:
      # Optional: specify a path to a custom values file for Cilium Helm chart
      # When specified, this file will be used instead of the default values
      helm_values_path: "/home/radek/dev/hetzner/gmbh/cilium-values.yaml"
      chart_version: "v1.18.2"
  cluster_cidr: 10.244.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for pod IPs
  service_cidr: 10.43.0.0/16 # optional: a custom IPv4/IPv6 network CIDR to use for service IPs. Warning, if you change this, you should also change cluster_dns!
  cluster_dns: 10.43.0.10 # optional: IPv4 Cluster IP for coredns service. Needs to be an address from the service_cidr range

schedule_workloads_on_masters: false
protect_against_deletion: false

# image: rocky-9 # optional: default is ubuntu-24.04
# snapshot_os: microos # optional: specifies the OS type when using a custom snapshot
masters_pool:
  instance_type: cpx21
  instance_count: 1
  locations:
  - nbg1

worker_node_pools:
- name: small
  instance_type: cpx21
  instance_count: 3
  location: nbg1
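As an aside (not necessarily the cause here, but easy to trip over with a custom values file): with Cilium's default cluster-pool IPAM, the pod CIDR comes from the Helm values rather than from the `cluster_cidr` above, so the two should agree. A sketch of the relevant values, using standard Cilium chart keys:

```yaml
# cilium-values.yaml (fragment) — align Cilium's pod CIDR with cluster_cidr
ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList:
      - "10.244.0.0/16"   # must match networking.cluster_cidr above
```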

The cluster never comes up successfully:

[Configuration] Validating configuration...
[Configuration] ...configuration seems valid.
[Private Network] Creating private network...
[Private Network] ...private network created
[SSH key] Creating SSH key...
[SSH key] ...SSH key created
[Instance gmbh-infrastructure-master1] Creating instance gmbh-infrastructure-master1 (attempt 1)...
[Instance gmbh-infrastructure-master1] Instance status: initializing
[Instance gmbh-infrastructure-master1] Powering on instance (attempt 1)
[Instance gmbh-infrastructure-master1] Waiting for instance to be powered on...
[Instance gmbh-infrastructure-master1] Instance status: running
[Instance gmbh-infrastructure-master1] ...instance gmbh-infrastructure-master1 created
[Firewall] Creating firewall...
[Firewall] ...firewall created
[Instance gmbh-infrastructure-pool-small-worker1] Creating instance gmbh-infrastructure-pool-small-worker1 (attempt 1)...
[Instance gmbh-infrastructure-pool-small-worker3] Creating instance gmbh-infrastructure-pool-small-worker3 (attempt 1)...
[Instance gmbh-infrastructure-pool-small-worker2] Creating instance gmbh-infrastructure-pool-small-worker2 (attempt 1)...
[Instance gmbh-infrastructure-master1] 🕒 Awaiting cloud config (may take a minute...)
[Instance gmbh-infrastructure-master1] .
[Instance gmbh-infrastructure-pool-small-worker3] Instance status: initializing
[Instance gmbh-infrastructure-pool-small-worker3] Powering on instance (attempt 1)
[Instance gmbh-infrastructure-pool-small-worker2] Instance status: initializing
[Instance gmbh-infrastructure-pool-small-worker2] Powering on instance (attempt 1)
[Instance gmbh-infrastructure-pool-small-worker3] Waiting for instance to be powered on...
[Instance gmbh-infrastructure-pool-small-worker2] Waiting for instance to be powered on...
[Instance gmbh-infrastructure-pool-small-worker1] Instance status: initializing
[Instance gmbh-infrastructure-pool-small-worker1] Powering on instance (attempt 1)
[Instance gmbh-infrastructure-pool-small-worker1] Waiting for instance to be powered on...
[Instance gmbh-infrastructure-master1] Cloud init finished: 24.67 - Thu, 30 Oct 2025 12:47:54 +0000 - v. 25.1.4-0ubuntu0~24.04.1
[Instance gmbh-infrastructure-master1] Private network interface enp7s0 found
[Instance gmbh-infrastructure-master1] Private network IP: 10.0.0.2
[Instance gmbh-infrastructure-master1] Installing k3s...
[Instance gmbh-infrastructure-master1] [INFO]  Using v1.30.3+k3s1 as release
[Instance gmbh-infrastructure-master1] [INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/sha256sum-amd64.txt
[Instance gmbh-infrastructure-master1] [INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.30.3+k3s1/k3s
[Instance gmbh-infrastructure-master1] [INFO]  Verifying binary download
[Instance gmbh-infrastructure-master1] [INFO]  Installing k3s to /usr/local/bin/k3s
[Instance gmbh-infrastructure-master1] [INFO]  Skipping installation of SELinux RPM
[Instance gmbh-infrastructure-master1] [INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[Instance gmbh-infrastructure-master1] [INFO]  Creating /usr/local/bin/crictl symlink to k3s
[Instance gmbh-infrastructure-master1] [INFO]  Creating /usr/local/bin/ctr symlink to k3s
[Instance gmbh-infrastructure-master1] [INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[Instance gmbh-infrastructure-master1] [INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[Instance gmbh-infrastructure-master1] [INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[Instance gmbh-infrastructure-master1] [INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[Instance gmbh-infrastructure-master1] [INFO]  systemd: Enabling k3s unit
[Instance gmbh-infrastructure-master1] Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[Instance gmbh-infrastructure-master1] [INFO]  systemd: Starting k3s
[Instance gmbh-infrastructure-pool-small-worker3] Instance status: starting
[Instance gmbh-infrastructure-pool-small-worker3] Powering on instance (attempt 1)
[Instance gmbh-infrastructure-pool-small-worker2] Instance status: initializing
[Instance gmbh-infrastructure-pool-small-worker2] Powering on instance (attempt 1)
[Instance gmbh-infrastructure-pool-small-worker3] Waiting for instance to be powered on...
[Instance gmbh-infrastructure-pool-small-worker2] Waiting for instance to be powered on...
[Instance gmbh-infrastructure-pool-small-worker1] Instance status: initializing
[Instance gmbh-infrastructure-pool-small-worker1] Powering on instance (attempt 1)
[Instance gmbh-infrastructure-pool-small-worker1] Waiting for instance to be powered on...
[Instance gmbh-infrastructure-master1] k3s installation completed successfully
[Instance gmbh-infrastructure-master1] Waiting for the control plane to be ready...
[Instance gmbh-infrastructure-pool-small-worker3] Instance status: running
[Instance gmbh-infrastructure-pool-small-worker2] Instance status: running
[Instance gmbh-infrastructure-pool-small-worker1] Instance status: running
[Control plane] Generating the kubeconfig file to /home/radek/.hetzner/kube/gmbh...
[Control plane] Switched to context "gmbh-infrastructure-master1".
[Control plane] ...kubeconfig file generated as /home/radek/.hetzner/kube/gmbh.
[Instance gmbh-infrastructure-pool-small-worker3] ...instance gmbh-infrastructure-pool-small-worker3 created
[Instance gmbh-infrastructure-master1] Validating master setup...
[Master Validation] ✅ Master validation successful
[Control plane] Generating the kubeconfig file to /home/radek/.hetzner/kube/gmbh...
[Control plane] Switched to context "gmbh-infrastructure-master1".
[Control plane] ...kubeconfig file generated as /home/radek/.hetzner/kube/gmbh.
[CNI] Installing Cilium...
[CNI] "cilium" already exists with the same configuration, skipping
[CNI] Release "cilium" does not exist. Installing it now.
[Instance gmbh-infrastructure-pool-small-worker2] ...instance gmbh-infrastructure-pool-small-worker2 created
[CNI] NAME: cilium
[CNI] LAST DEPLOYED: Thu Oct 30 13:48:22 2025
[CNI] NAMESPACE: kube-system
[CNI] STATUS: deployed
[CNI] REVISION: 1
[CNI] TEST SUITE: None
[CNI] NOTES:
[CNI] You have successfully installed Cilium with Hubble Relay and Hubble UI.
[CNI] 
[CNI] Your release version is 1.18.2.
[CNI] 
[CNI] For any further help, visit https://docs.cilium.io/en/v1.18/gettinghelp
[CNI] Waiting for daemon set "cilium" rollout to finish: 0 of 1 updated pods are available...
[Instance gmbh-infrastructure-pool-small-worker1] ...instance gmbh-infrastructure-pool-small-worker1 created

The workers never join. In fact, no kubelet gets installed on any of the worker nodes. All I see is:

kubectl --kubeconfig=/home/radek/.hetzner/kube/gmbh get nodes -o wide
NAME                          STATUS     ROLES                       AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
gmbh-infrastructure-master1   NotReady   control-plane,etcd,master   69s   v1.30.3+k3s1   10.0.0.2      <none>        Ubuntu 24.04.3 LTS   6.8.0-71-generic   containerd://1.7.17-k3s1

This seems to be happening because the CNI installation step runs before the workers are bootstrapped. And since there are no workers yet (and the master is tainted against workloads, per `schedule_workloads_on_masters: false`), there's nowhere for Cilium to schedule its pods.
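One thing worth checking: if the custom values file overrides the chart's default tolerations, the agent and operator genuinely cannot land on the lone, still-NotReady, tainted master. By default the Cilium chart tolerates all taints; a sketch of values that restore that behavior (standard chart keys, shown as an assumption about what the custom file might be missing):

```yaml
# cilium-values.yaml (fragment) — ensure Cilium can schedule on the tainted master
tolerations:
  - operator: Exists        # agent DaemonSet tolerates all taints (chart default)
operator:
  tolerations:
    - operator: Exists      # operator can run on NotReady/control-plane nodes
  replicas: 1               # a single replica fits a one-node control plane
```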

https://github.com/vitobotta/hetzner-k3s/blob/main/src/kubernetes/installer.cr#L62-L68

Shouldn't the software installation happen after the workers have joined?

Changing `cni.mode` to flannel works, but I don't want flannel.

How does one create a cluster with Cilium instead of flannel?
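As a possible workaround (untested here, just a sketch based on the options already in the config above): letting the master schedule workloads gives the Cilium rollout at least one node to land on, so the install's rollout wait can complete before the workers are bootstrapped:

```yaml
# hetzner-k3s config (fragment) — give Cilium a schedulable node during bootstrap
schedule_workloads_on_masters: true
```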
