
Hardening options are not applying and breaking installation #11842

Open
dmakeienko opened this issue Dec 30, 2024 · 1 comment
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@dmakeienko

dmakeienko commented Dec 30, 2024

What happened?

To deploy Kubernetes on OpenStack, I've created the following structure:

├── infra
├── inventory
│   ├── artifacts
│   │   └── admin.conf
│   ├── credentials
│   │   └── kube_encrypt_token.creds
│   └── group_vars
│       ├── all.yaml
│       ├── hardening.yaml
│       └── openstack.yaml
├── kubespray

Obviously, all.yaml contains all the necessary configuration for the cluster, openstack.yaml contains OpenStack values generated by Terraform, and hardening.yaml contains most of the configuration from https://github.com/kubernetes-sigs/kubespray/blob/master/docs/operations/hardening.md
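Note that Ansible only auto-loads group_vars files whose names match inventory group names: all.yaml applies to every host, but hardening.yaml and openstack.yaml are only picked up automatically if groups named hardening and openstack exist in the inventory. That may explain the behaviour below. A sketch of a layout that loads all three files for every host, assuming the standard group_vars/all/ directory convention:

inventory/
└── group_vars
    └── all
        ├── all.yaml
        ├── hardening.yaml
        └── openstack.yaml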

I deploy the cluster with this command:
ansible-playbook -i ../inventory -b -vvvvv --private-key=~/.ssh/admin cluster.yml
Then I check everything and find that almost nothing from hardening.yaml has been applied.
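One quick way to spot-check which flags actually reached the API server (a sketch, assuming the standard static pod manifest path on a control-plane node):

# List some of the hardening-related flags currently in the rendered manifest
sudo grep -E -- '--tls-min-version|--audit-log-path|--bind-address' /etc/kubernetes/manifests/kube-apiserver.yaml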
As a next step, I tried pointing to that file explicitly:

ansible-playbook -i ../inventory -b -vvvvv --private-key=~/.ssh/dev1-k8s-admin -e "@../inventory/group_vars/hardening.yaml" cluster.yml
After that command, about half of the options were applied:

  • tls_min_version
  • tls_cipher_suites
  • encryption at rest
  • kube_apiserver_enable_admission_plugins
  • kubelet_systemd_hardening

But others were still missing, e.g. the kube audit logs, kube_scheduler_bind_address, and kube_controller_manager_bind_address.

Then I tried to run this command:
ansible-playbook -i ../inventory -b -vvvvv --private-key=~/.ssh/dev1-k8s-admin -e "@../inventory/group_vars/hardening.yaml" -e upgrade_cluster_setup=true cluster.yml
but nothing changed. I checked /etc/kubernetes/manifests/kube-apiserver.yaml and it had not changed; however, the kubeadm-config.yaml file did contain the audit log parameters.
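If the rendered kubeadm config is correct but the static pod manifest is stale, one way to force kubeadm to re-render just the kube-apiserver manifest is the following (a sketch; the config path is the one kubespray writes on control-plane nodes, so treat it as an assumption):

# Re-generate the kube-apiserver static pod manifest from the kubeadm config
sudo kubeadm init phase control-plane apiserver --config /etc/kubernetes/kubeadm-config.yaml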

Then I combined ALL variables into the single file all.yaml, recreated the infrastructure completely, and ran
ansible-playbook -i ../inventory -b -vvvvv --private-key=~/.ssh/admin cluster.yml

However, kubespray failed at the step where it installs kubelet-csr-approver, and I got the following error:
kubespray stderr: Error: release kubelet-csr-approver failed, and has been uninstalled due to atomic being set: context deadline exceeded
When that happened, I noticed that all nodes were tainted with the following taint:
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
and kubelet-csr-approver failed to deploy because of it. I removed those taints, ran cluster.yml again, and kubelet-csr-approver deployed successfully.
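For reference, removing that taint by hand looks like this (a sketch; in a healthy install the external cloud-controller-manager is supposed to remove it once nodes are initialized):

# Remove the uninitialized taint from all nodes
kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-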

Also, no matter what I do, some options are simply not applied, for example (see the check after this list):

  • kube_controller_manager_bind_address: 127.0.0.1
  • kube_scheduler_bind_address: 127.0.0.1
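A way to confirm which bind addresses are actually in effect is to inspect the listening sockets on a control-plane node (a sketch; 10257 and 10259 are the default secure ports of kube-controller-manager and kube-scheduler):

# With the hardening applied, both should listen on 127.0.0.1, not 0.0.0.0 or *
sudo ss -tlnp | grep -E ':10257|:10259'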

And other options completely break the installation process, such as:

  • remove_anonymous_access: true
  • etcd_deployment_type: kubeadm

Regarding the remove_anonymous_access option, the problem is similar to this issue

kube-apiserver.yaml

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.21.164.83:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.21.164.83
    - --allow-privileged=true
    - --anonymous-auth=True
    - --apiserver-count=3
    - --authorization-mode=Node,RBAC
    - --bind-address=0.0.0.0
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --default-not-ready-toleration-seconds=300
    - --default-unreachable-toleration-seconds=300
    - --enable-admission-plugins=NodeRestriction
    - --enable-aggregator-routing=False
    - --enable-bootstrap-token-auth=true
    - --endpoint-reconciler-type=lease
    - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
    - --etcd-certfile=/etc/ssl/etcd/ssl/node-dev1-k8s-master-0.pem
    - --etcd-compaction-interval=5m0s
    - --etcd-keyfile=/etc/ssl/etcd/ssl/node-dev1-k8s-master-0-key.pem
    - --etcd-servers=https://10.21.164.83:2379,https://10.21.164.229:2379,https://10.21.164.111:2379
    - --event-ttl=1h0m0s
    - --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
    - --oidc-client-id=dev1-k8s
    - --oidc-groups-claim=groups
    - --oidc-issuer-url=https://kk.in/realms/master
    - --oidc-username-claim=preferred_username
    - --profiling=False
    - --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key
    - --request-timeout=1m0s
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/ssl/sa.pub
    - --service-account-lookup=True
    - --service-account-signing-key-file=/etc/kubernetes/ssl/sa.key
    - --service-cluster-ip-range=10.233.0.0/18
    - --service-node-port-range=30000-32767
    - --storage-backend=etcd3
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key
    image: registry.k8s.io/kube-apiserver:v1.31.4
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.21.164.83
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 10.21.164.83
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 10.21.164.83
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/ssl/etcd/ssl
      name: etcd-certs-0
      readOnly: true
    - mountPath: /etc/kubernetes/ssl
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/ssl/etcd/ssl
      type: DirectoryOrCreate
    name: etcd-certs-0
  - hostPath:
      path: /etc/kubernetes/ssl
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: ""
    name: usr-share-ca-certificates
status: {}

kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.21.164.83
  bindPort: 6443
nodeRegistration:
  name: "dev1-k8s-master-0"
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  criSocket: unix:///var/run/containerd/containerd.sock
  kubeletExtraArgs:
  - name: cloud-provider
    value: external
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: cluster.local
encryptionAlgorithm: RSA-2048
etcd:
  external:
      endpoints:
      - https://10.21.164.83:2379
      - https://10.21.164.229:2379
      - https://10.21.164.111:2379
      caFile: /etc/ssl/etcd/ssl/ca.pem
      certFile: /etc/ssl/etcd/ssl/node-dev1-k8s-master-0.pem
      keyFile: /etc/ssl/etcd/ssl/node-dev1-k8s-master-0-key.pem
dns:
  imageRepository: registry.k8s.io/coredns
  imageTag: v1.11.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: "10.233.0.0/18"
  podSubnet: "10.233.64.0/18"
kubernetesVersion: v1.31.4
controlPlaneEndpoint: 10.21.164.83:6443
certificatesDir: /etc/kubernetes/ssl
imageRepository: registry.k8s.io
apiServer:
  extraArgs:
  - name: etcd-compaction-interval
    value: "5m0s"
  - name: default-not-ready-toleration-seconds
    value: "300"
  - name: default-unreachable-toleration-seconds
    value: "300"
  - name: anonymous-auth
    value: "True"
  - name: authorization-mode
    value: "Node,RBAC"
  - name: bind-address
    value: "0.0.0.0"
  - name: enable-admission-plugins
    value: "NodeRestriction"
  - name: admission-control-config-file
    value: "/etc/kubernetes/admission-controls.yaml"
  - name: apiserver-count
    value: "3"
  - name: endpoint-reconciler-type
    value: lease
  - name: service-node-port-range
    value: "30000-32767"
  - name: service-cluster-ip-range
    value: "10.233.0.0/18"
  - name: kubelet-preferred-address-types
    value: "InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP"
  - name: profiling
    value: "False"
  - name: request-timeout
    value: "120s"
  - name: enable-aggregator-routing
    value: "False"
  - name: service-account-lookup
    value: "True"
  - name: oidc-issuer-url
    value: "https://kk.com/realms/master"
  - name: oidc-client-id
    value: "dev1-k8s"
  - name: oidc-username-claim
    value: "preferred_username"
  - name: oidc-groups-claim
    value: "groups"
  - name: encryption-provider-config
    value: "/etc/kubernetes/ssl/secrets_encryption.yaml"
  - name: storage-backend
    value: "etcd3"
  - name: allow-privileged
    value: "true"
  - name: audit-policy-file
    value: "/etc/kubernetes/audit-policy/apiserver-audit-policy.yaml"
  - name: audit-log-path
    value: "/var/log/kube-apiserver-log.json"
  - name: audit-log-maxage
    value: "30"
  - name: audit-log-maxbackup
    value: "10"
  - name: audit-log-maxsize
    value: "100"
  - name: tls-min-version
    value: "VersionTLS12"
  - name: tls-cipher-suites
    value: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305"
  - name: event-ttl
    value: "1h0m0s"
  - name: kubelet-certificate-authority
    value: "/etc/kubernetes/ssl/ca.crt"
  extraVolumes:
  - name: audit-policy
    hostPath: /etc/kubernetes/audit-policy
    mountPath: /etc/kubernetes/audit-policy
  - name: audit-logs
    hostPath: /var/log/kubernetes/audit
    mountPath: /var/log
    readOnly: false
  - name: admission-control-configs
    hostPath: /etc/kubernetes/admission-controls
    mountPath: /etc/kubernetes
    readOnly: false
    pathType: DirectoryOrCreate
  - name: usr-share-ca-certificates
    hostPath: /usr/share/ca-certificates
    mountPath: /usr/share/ca-certificates
    readOnly: true
  certSANs:
  - "kubernetes"
  - "kubernetes.default"
  - "kubernetes.default.svc"
  - "kubernetes.default.svc.cluster.local"
  - "10.233.0.1"
  - "localhost"
  - "127.0.0.1"
  - "dev1-k8s-master-0"
  - "dev1-k8s-master-1"
  - "dev1-k8s-master-2"
  - "api-dev1-k8s.internal"
  - "10.21.164.83"
  - "10.21.164.229"
  - "10.21.164.111"
  - "dev1-k8s-master-0.cluster.local"
  - "dev1-k8s-master-1.cluster.local"
  - "dev1-k8s-master-2.cluster.local"
controllerManager:
  extraArgs:
  - name: node-monitor-grace-period
    value: "40s"
  - name: node-monitor-period
    value: "5s"
  - name: cluster-cidr
    value: "10.233.64.0/18"
  - name: service-cluster-ip-range
    value: "10.233.0.0/18"
  - name: node-cidr-mask-size
    value: "24"
  - name: profiling
    value: "False"
  - name: terminated-pod-gc-threshold
    value: "50"
  - name: bind-address
    value: "127.0.0.1"
  - name: leader-elect-lease-duration
    value: "15s"
  - name: leader-elect-renew-deadline
    value: "10s"
  - name: feature-gates
    value: "RotateKubeletServerCertificate=true"
  - name: configure-cloud-routes
    value: "false"
  - name: tls-min-version
    value: "VersionTLS12"
  - name: tls-cipher-suites
    value: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305"
scheduler:
  extraArgs:
  - name: bind-address
    value: "127.0.0.1"
  - name: config
    value: "/etc/kubernetes/kubescheduler-config.yaml"
  - name: profiling
    value: "False"
  - name: tls-min-version
    value: "VersionTLS12"
  - name: tls-cipher-suites
    value: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305"
  extraVolumes:
  - name: kubescheduler-config
    hostPath: /etc/kubernetes/kubescheduler-config.yaml
    mountPath: /etc/kubernetes/kubescheduler-config.yaml
    readOnly: true
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: 
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: 
  qps: 5
clusterCIDR: "10.233.64.0/18"
configSyncPeriod: 15m0s
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: False
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: "dev1-k8s-master-0"
iptables:
  masqueradeAll: False
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: []
  minSyncPeriod: 0s
  scheduler: rr
  syncPeriod: 30s
  strictARP: False
  tcpTimeout: 0s
  tcpFinTimeout: 0s
  udpTimeout: 0s
metricsBindAddress: 127.0.0.1:10249
mode: ipvs
nodePortAddresses: []
oomScoreAdj: -999
portRange: 
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 169.254.25.10
featureGates:
  RotateKubeletServerCertificate: true

What did you expect to happen?

That I can deploy Kubernetes from scratch with all hardening settings enabled.

How can we reproduce it (as minimally and precisely as possible)?

Create the following structure:

├── infra
├── inventory
│   ├── artifacts
│   │   └── admin.conf
│   ├── credentials
│   │   └── kube_encrypt_token.creds
│   └── group_vars
│       ├── all.yaml
│       ├── hardening.yaml
│       └── openstack.yaml
├── kubespray

and use the values below.

OS

Linux 5.15.0-130-generic x86_64
PRETTY_NAME="Ubuntu 22.04.5 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.5 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Version of Ansible

ansible [core 2.16.14]
config file = /home/denys/project/k8s-svc/env/dev1/kubespray/ansible.cfg
configured module search path = ['/home/denys/project/k8s-svc/env/dev1/kubespray/library']
ansible python module location = /home/denys/project/k8s-svc/env/dev1/kubespray/venv/lib/python3.12/site-packages/ansible
ansible collection location = /home/denys/.ansible/collections:/usr/share/ansible/collections
executable location = /home/denys/project/k8s-svc/env/dev1/kubespray/venv/bin/ansible
python version = 3.12.3 (main, Nov 6 2024, 18:32:19) [GCC 13.2.0] (/home/denys/project/k8s-svc/env/dev1/kubespray/venv/bin/python3)
jinja version = 3.1.5
libyaml = True

Version of Python

Python 3.10.12

Version of Kubespray (commit)

3305ae9

Network plugin used

cilium

Full inventory with variables

all.yaml

cluster_name: cluster.local
helm_enabled: true
metrics_server_enabled: true
metrics_server_kubelet_insecure_tls: true

## k8s API LB config
apiserver_loadbalancer_domain_name: "api-dev1-k8s.com"
loadbalancer_apiserver_localhost: true
loadbalancer_apiserver_type: nginx
loadbalancer_apiserver_port: 6443


## Upstream dns servers
upstream_dns_servers:
  - 10.21.164.2
  - 10.21.164.3

cloud_provider: external
external_cloud_provider: openstack

#Openstack
external_openstack_lbaas_manage_security_groups: true
external_openstack_lbaas_create_monitor: true
external_openstack_lbaas_enabled: true
external_openstack_network_ipv6_disabled: true

#Ingress
ingress_nginx_enabled: true
ingress_nginx_nodeselector:
  node-role.kubernetes.io/worker: ""
ingress_nginx_extra_args:
   - --default-ssl-certificate=default/default-internal-cert
ingress_nginx_class: nginx

#k8s
kube_version: v1.31.4
kube_network_plugin: cilium
cilium_version: "v1.15.9"
kubeconfig_localhost: true

#system reserves
system_reserved: true
system_memory_reserved: 512Mi
system_cpu_reserved: 500m
system_master_memory_reserved: 256Mi
system_master_cpu_reserved: 250m
persistent_volumes_enabled: true

# OIDC (Keycloak)
kube_oidc_auth: true
kube_oidc_url: https://kk.com/realms/master
kube_oidc_client_id: dev1-k8s
kube_oidc_username_claim: preferred_username
kube_oidc_groups_claim: groups

# Storage
cinder_csi_enabled: true
expand_persistent_volumes: true
storage_classes:
  - name: "cinder-csi"
    is_default: true
    provisioner: "cinder.csi.openstack.org"
    parameters:
      type: "CephSSDmAttach"
      availability: "nova"
      allowVolumeExpansion: "True"
    reclaim_policy: "Delete"
    volume_binding_mode: "Immediate"

containerd_registries_mirrors:
  - prefix: docker.io
    mirrors:
      - host: https://docker-mirror.internal
        capabilities: ["pull", "resolve"]
        skip_verify: false

hardening.yaml

# Hardening
# https://github.com/kubernetes-sigs/kubespray/blob/master/docs/operations/hardening.md
## kube-apiserver
authorization_modes: ['Node', 'RBAC']
kube_apiserver_request_timeout: 120s
kube_apiserver_service_account_lookup: true

# enable kubernetes audit
kubernetes_audit: true
audit_log_path: "/var/log/kube-apiserver-log.json"
audit_log_maxage: 30
audit_log_maxbackups: 10
audit_log_maxsize: 100

tls_min_version: VersionTLS12
tls_cipher_suites:
  - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305

# enable encryption at rest
kube_encrypt_secret_data: true
kube_encryption_resources: [secrets]
kube_encryption_algorithm: "secretbox"

kube_apiserver_enable_admission_plugins:
  - NodeRestriction
kube_apiserver_admission_control_config_file: true

# Remove anonymous access to cluster
# remove_anonymous_access: true  BREAKING!!!! 

## kube-controller-manager
kube_controller_manager_bind_address: 127.0.0.1 #not working
kube_controller_terminated_pod_gc_threshold: 50
kube_controller_feature_gates: ["RotateKubeletServerCertificate=true"]

## kube-scheduler
kube_scheduler_bind_address: 127.0.0.1 #not working

## kubelet
kubelet_rotate_server_certificates: true
kubelet_rotate_certificates: true
kubelet_feature_gates: ["RotateKubeletServerCertificate=true"]
kubelet_protect_kernel_defaults: true #https://orca.security/resources/blog/kubernetes-nodes-kubelet-protect-kernel-defaults-is-set-false/
kubelet_streaming_connection_idle_timeout: "5m" #https://docs.prismacloud.io/en/enterprise-edition/policy-reference/kubernetes-policies/kubernetes-policy-index/ensure-that-the-streaming-connection-idle-timeout-argument-is-not-set-to-0
kubelet_make_iptables_util_chains: true #https://orca.security/resources/blog/kubernetes-nodes-kubelet-make-iptables-util-chains-is-set-false/
kubelet_systemd_hardening: true

#https://github.com/kubernetes-sigs/kubespray/issues/10102
kubelet_csr_approver_values:
  # Do not check DNS resolution in dev (not recommended in production)
  bypassDnsResolution: true
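For completeness, a quick way to confirm the kubelet-side hardening landed on a node (a sketch; /var/lib/kubelet/config.yaml is the kubeadm default path and may differ under kubespray):

# These fields should reflect the kubelet_* variables above
sudo grep -E 'rotateCertificates|serverTLSBootstrap|protectKernelDefaults|streamingConnectionIdleTimeout|makeIPTablesUtilChains' /var/lib/kubelet/config.yaml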

Command used to invoke ansible

ansible-playbook -i ../inventory -b -vvvvv --private-key=~/.ssh/key cluster.yml

Output of ansible run

PLAY RECAP *****************************************************************************************************************************************************
dev1-k8s-master-0          : ok=694  changed=160  unreachable=0    failed=0    skipped=1025 rescued=0    ignored=3   
dev1-k8s-master-1          : ok=613  changed=136  unreachable=0    failed=0    skipped=940  rescued=0    ignored=3   
dev1-k8s-master-2          : ok=615  changed=137  unreachable=0    failed=0    skipped=938  rescued=0    ignored=3   
dev1-k8s-worker-0          : ok=584  changed=106  unreachable=0    failed=0    skipped=747  rescued=0    ignored=1   
dev1-k8s-worker-1          : ok=584  changed=106  unreachable=0    failed=0    skipped=738  rescued=0    ignored=1   
dev1-k8s-worker-2          : ok=584  changed=106  unreachable=0    failed=0    skipped=738  rescued=0    ignored=1   

Monday 30 December 2024  20:47:11 +0200 (0:00:00.145)       0:19:35.584 ******* 
=============================================================================== 
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 50.05s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 33.30s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 32.57s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
kubernetes/preinstall : Install packages requirements -------------------------------------------------------------------------------------------------- 25.19s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/kubernetes/preinstall/tasks/0070-system-packages.yml:62 -----------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 24.67s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 22.05s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
kubernetes/control-plane : Control plane | wait for kube-controller-manager ---------------------------------------------------------------------------- 21.93s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/kubernetes/control-plane/handlers/main.yml:93 ---------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 21.79s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 20.86s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 19.56s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 18.40s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 15.76s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 15.14s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
kubernetes/control-plane : Kubeadm | Initialize first control plane node ------------------------------------------------------------------------------- 15.04s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/kubernetes/control-plane/tasks/kubeadm-setup.yml:167 --------------------------------------------------------
etcd : Gen_certs | Write etcd member/admin and kube_control_plane client certs to other etcd nodes ----------------------------------------------------- 14.73s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/etcd/tasks/gen_certs_script.yml:86 --------------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 13.94s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_container | Download image if required --------------------------------------------------------------------------------------------- 13.51s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_container.yml:57 --------------------------------------------------------------------
download : Download_file | Download item --------------------------------------------------------------------------------------------------------------- 12.80s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_file.yml:58 -------------------------------------------------------------------------
download : Download_file | Download item --------------------------------------------------------------------------------------------------------------- 11.76s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_file.yml:58 -------------------------------------------------------------------------
container-engine/containerd : Download_file | Download item -------------------------------------------------------------------------------------------- 11.07s
/home/denys/project/k8s-svc/env/dev1/kubespray/roles/download/tasks/download_file.yml:58 -------------------------------------------------------------------------

Anything else we need to know

No response

@dmakeienko dmakeienko added the kind/bug Categorizes issue or PR as related to a bug. label Dec 30, 2024
@ledroide
Contributor

ledroide commented Jan 2, 2025

@dmakeienko : I confirm that setting remove_anonymous_access: true broke my cluster installation/update at the step Create kubeadm client config. Ref issue #11835
