
Conversation

@kartikjoshi21
Contributor

cni: add ipv6/dual-stack support for calico cni

This is the final PR in the series; it completes IPv6 and dual-stack support with the Calico CNI.
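
Example usage of the new flags (these invocations are taken from the verification logs below):

# IPv6-only cluster
./out/minikube start -p calico-v6 --driver=docker --cni=calico \
  --ip-family=ipv6 \
  --service-cluster-ip-range-v6=fd00:210::/108 \
  --pod-cidr-v6=fd11:33::/64

# Dual-stack cluster
./out/minikube start -p calico-dual --driver=docker --cni=calico \
  --ip-family=dual \
  --service-cluster-ip-range=10.96.0.0/12 \
  --service-cluster-ip-range-v6=fd00:220::/108 \
  --pod-cidr=10.244.0.0/16 \
  --pod-cidr-v6=fd11:44::/64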

@k8s-ci-robot k8s-ci-robot requested a review from nirs December 8, 2025 08:14
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: kartikjoshi21
Once this PR has been reviewed and has the lgtm label, please assign prezha for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot requested a review from prezha December 8, 2025 08:14
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. labels Dec 8, 2025
@k8s-ci-robot
Contributor

Hi @kartikjoshi21. Thanks for your PR.

I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@kartikjoshi21
Contributor Author

Logs:

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ minikube delete -p calico-v4 --all --purge

./out/minikube start -p calico-v4 \
  --driver=docker \
  --ip-family=ipv4 \
  --cni=calico \
  --service-cluster-ip-range=10.96.0.0/12 \
  --pod-cidr=10.244.0.0/16
🔥  Successfully deleted all profiles
💀  Successfully purged minikube directory located at - [/home/kartikjoshi/.minikube]
📌  Kicbase images have not been deleted. To delete images run:
    ▪ docker rmi gcr.io/k8s-minikube/kicbase:v0.0.48 gcr.io/k8s-minikube/kicbase:v0.0.47
😄  [calico-v4] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
👍  Starting "calico-v4" primary control-plane node in "calico-v4" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
💾  Downloading Kubernetes v1.34.1 preload ...
    > preloaded-images-k8s-v18-v1...:  337.01 MiB / 337.01 MiB  100.00% 3.32 Mi
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.34.1 on Docker 29.0.2 ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "calico-v4" cluster and "default" namespace by default


kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get ippools.crd.projectcalico.org

kubectl get ippool default-ipv4-ippool -o yaml | sed -n '1,40p'
NAME                  AGE
default-ipv4-ippool   8h
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  annotations:
    projectcalico.org/metadata: '{"creationTimestamp":"2025-12-05T09:30:08Z"}'
  creationTimestamp: "2025-12-05T09:30:08Z"
  generation: 1
  name: default-ipv4-ippool
  resourceVersion: "732"
  uid: 10c91ec2-add7-423c-8a37-32e21408203a
spec:
  allowedUses:
  - Workload
  - Tunnel
  assignmentMode: Automatic
  blockSize: 26
  cidr: 10.244.0.0/16
  ipipMode: Always
  natOutgoing: true
  nodeSelector: all()
  vxlanMode: Never


kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl run v4test --image=nginx:stable --restart=Never
kubectl wait pod v4test --for=condition=Ready --timeout=90s
kubectl exec v4test -- ip -4 addr show dev eth0
kubectl exec v4test -- ip -6 addr show dev eth0 || true
pod/v4test created
pod/v4test condition met

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get pod v4test -o wide
kubectl get pod v4test -o jsonpath='{.status.podIP}{"\n"}'
kubectl get pod v4test -o jsonpath='{.status.podIPs[*].ip}{"\n"}'
NAME     READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
v4test   1/1     Running   0          10m   10.244.159.3   calico-v4   <none>           <none>
10.244.159.3
10.244.159.3
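
The IPv4 run above only checks the pod address; the node-level pod CIDR can be checked the same way as in the dual-stack run further below (a sketch not captured in this log; the expected value assumes the default per-node /24 slice of --pod-cidr):

kubectl get node calico-v4 -o jsonpath='{.spec.podCIDRs}{"\n"}'
# expected: ["10.244.0.0/24"]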

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube start -p calico-v6   --driver=docker   --ip-family=ipv6   --cni=calico   --service-cluster-ip-range-v6=fd00:210::/108   --pod-cidr-v6=fd11:33::/64
😄  [calico-v6] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
💡  If Docker daemon IPv6 is disabled, enable it in /etc/docker/daemon.json and restart:
  {"ipv6": true, "fixed-cidr-v6": "fd00:55:66::/64"}
👍  Starting "calico-v6" primary control-plane node in "calico-v6" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
💾  Downloading Kubernetes v1.34.1 preload ...
    > preloaded-images-k8s-v18-v1...:  337.01 MiB / 337.01 MiB  100.00% 3.84 Mi
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
🐳  Preparing Kubernetes v1.34.1 on Docker 29.0.2 ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "calico-v6" cluster and "default" namespace by default
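
The 💡 hint printed above refers to the host Docker daemon, not to minikube itself. A minimal sketch of enabling IPv6 there (assumes a systemd-based Linux host; the fixed-cidr-v6 value is only an example ULA prefix):

# /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:55:66::/64"
}
sudo systemctl restart docker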

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl run v6test --image=quay.io/cilium/alpine-curl --restart=Never --command -- sleep 3600
kubectl wait pod v6test --for=condition=Ready --timeout=120s
pod/v6test created
pod/v6test condition met
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl exec v6test -- ip -6 addr show dev eth0
kubectl exec v6test -- ip -6 route
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP qlen 1000
    inet6 fd11:33::818:5772:a663:ea01/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::2cf1:a1ff:fe15:2d0f/64 scope link
       valid_lft forever preferred_lft forever
fd11:33::818:5772:a663:ea01 dev eth0  metric 256
fe80::/64 dev eth0  metric 256
default via fe80::ecee:eeff:feee:eeee dev eth0  metric 1024
multicast ff00::/8 dev eth0  metric 256
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl exec v6test -- curl -k -s -o /dev/null -w "%{http_code}\n" https://[fd00:210::1]:443/version
200
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl create deploy v6echo --image=nginx:stable
kubectl rollout status deploy/v6echo

kubectl expose deploy v6echo --port=80 --target-port=80
kubectl get svc v6echo -o yaml | sed -n '1,80p'
deployment.apps/v6echo created
Waiting for deployment "v6echo" rollout to finish: 0 of 1 updated replicas are available...
deployment "v6echo" successfully rolled out
service/v6echo exposed
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2025-12-06T18:51:48Z"
  labels:
    app: v6echo
  name: v6echo
  namespace: default
  resourceVersion: "2515"
  uid: e166482a-5dde-4330-967d-4f8718ce465c
spec:
  clusterIP: fd00:210::b:dc96
  clusterIPs:
  - fd00:210::b:dc96
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: v6echo
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ # Hit the service via IPv6 cluster IP
V6IP=$(kubectl get svc v6echo -o jsonpath='{.spec.clusterIP}')
kubectl exec v6test -- curl -s -g http://[$V6IP] | head

# (Optional) DNS:
kubectl exec v6test -- nslookup v6echo.default.svc.cluster.local kube-dns.kube-system.svc.cluster.local || true
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
Server:         kube-dns.kube-system.svc.cluster.local
Address:        fd00:210::a#53

Name:   v6echo.default.svc.cluster.local
Address: fd00:210::b:dc96


kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get ippools.crd.projectcalico.org -A
kubectl get ippools.crd.projectcalico.org -o yaml | sed -n '1,160p'
NAME                  AGE
default-ipv4-ippool   52m
default-ipv6-ippool   52m
apiVersion: v1
items:
- apiVersion: crd.projectcalico.org/v1
  kind: IPPool
  metadata:
    annotations:
      projectcalico.org/metadata: '{"creationTimestamp":"2025-12-06T17:42:13Z"}'
    creationTimestamp: "2025-12-06T17:42:13Z"
    generation: 1
    name: default-ipv4-ippool
    resourceVersion: "583"
    uid: 312bf383-c23e-43aa-9c53-0db3224f77c7
  spec:
    allowedUses:
    - Workload
    - Tunnel
    assignmentMode: Automatic
    blockSize: 26
    cidr: 192.168.0.0/16
    ipipMode: Never
    natOutgoing: true
    nodeSelector: all()
    vxlanMode: Never
- apiVersion: crd.projectcalico.org/v1
  kind: IPPool
  metadata:
    annotations:
      projectcalico.org/metadata: '{"creationTimestamp":"2025-12-06T17:42:13Z"}'
    creationTimestamp: "2025-12-06T17:42:13Z"
    generation: 1
    name: default-ipv6-ippool
    resourceVersion: "584"
    uid: 636dadfe-0046-417b-bb3d-0bcd8dd5ba17
  spec:
    allowedUses:
    - Workload
    - Tunnel
    assignmentMode: Automatic
    blockSize: 122
    cidr: fd11:33::/64
    ipipMode: Never
    nodeSelector: all()
    vxlanMode: Never
kind: List
metadata:
  resourceVersion: ""

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ ./out/minikube start -p calico-dual \
  --driver=docker \
  --ip-family=dual \
  --cni=calico \
  --service-cluster-ip-range=10.96.0.0/12 \
  --service-cluster-ip-range-v6=fd00:220::/108 \
  --pod-cidr=10.244.0.0/16 \
  --pod-cidr-v6=fd11:44::/64 \
  --container-runtime=containerd
😄  [calico-dual] minikube v1.37.0 on Ubuntu 24.04 (kvm/amd64)
✨  Using the docker driver based on user configuration
📌  Using Docker driver with root privileges
💡  If Docker daemon IPv6 is disabled, enable it in /etc/docker/daemon.json and restart:
  {"ipv6": true, "fixed-cidr-v6": "fd00:55:66::/64"}
👍  Starting "calico-dual" primary control-plane node in "calico-dual" cluster
🚜  Pulling base image v0.0.48-1763789673-21948 ...
💾  Downloading Kubernetes v1.34.1 preload ...
    > preloaded-images-k8s-v18-v1...:  390.07 MiB / 390.07 MiB  100.00% 3.58 Mi
🔥  Creating docker container (CPUs=2, Memory=3072MB) ...
📦  Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "calico-dual" cluster and "default" namespace by default
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-68f8b9d99-crtxl   0/1     Pending    0          67s
kube-system   calico-node-722gw                         0/1     Init:1/4   0          67s
kube-system   coredns-66bc5c9577-6xl7c                  0/1     Pending    0          67s
kube-system   etcd-calico-dual                          1/1     Running    0          72s
kube-system   kube-apiserver-calico-dual                1/1     Running    0          75s
kube-system   kube-controller-manager-calico-dual       1/1     Running    0          72s
kube-system   kube-proxy-79gb8                          1/1     Running    0          67s
kube-system   kube-scheduler-calico-dual                1/1     Running    0          75s
kube-system   storage-provisioner                       0/1     Pending    0          64s
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE
kube-system   calico-kube-controllers-68f8b9d99-crtxl   0/1     ContainerCreating   0          113s
kube-system   calico-node-722gw                         1/1     Running             0          113s
kube-system   coredns-66bc5c9577-6xl7c                  1/1     Running             0          113s
kube-system   etcd-calico-dual                          1/1     Running             0          118s
kube-system   kube-apiserver-calico-dual                1/1     Running             0          2m1s
kube-system   kube-controller-manager-calico-dual       1/1     Running             0          118s
kube-system   kube-proxy-79gb8                          1/1     Running             0          113s
kube-system   kube-scheduler-calico-dual                1/1     Running             0          2m1s
kube-system   storage-provisioner                       1/1     Running             0          110s
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-68f8b9d99-crtxl   1/1     Running   0          2m25s
kube-system   calico-node-722gw                         1/1     Running   0          2m25s
kube-system   coredns-66bc5c9577-6xl7c                  1/1     Running   0          2m25s
kube-system   etcd-calico-dual                          1/1     Running   0          2m30s
kube-system   kube-apiserver-calico-dual                1/1     Running   0          2m33s
kube-system   kube-controller-manager-calico-dual       1/1     Running   0          2m30s
kube-system   kube-proxy-79gb8                          1/1     Running   0          2m25s
kube-system   kube-scheduler-calico-dual                1/1     Running   0          2m33s
kube-system   storage-provisioner                       1/1     Running   0          2m22s
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ # Node + runtime
kubectl get nodes -o wide

# Pod CIDRs on the node
kubectl get node calico-dual -o jsonpath='{.spec.podCIDRs}{"\n"}'

# Calico + core Kubernetes pods
kubectl get pods -A
NAME          STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
calico-dual   Ready    control-plane   2m37s   v1.34.1   172.21.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   containerd://2.1.5
["10.244.0.0/24","fd11:44::/64"]
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-68f8b9d99-crtxl   1/1     Running   0          2m28s
kube-system   calico-node-722gw                         1/1     Running   0          2m28s
kube-system   coredns-66bc5c9577-6xl7c                  1/1     Running   0          2m28s
kube-system   etcd-calico-dual                          1/1     Running   0          2m33s
kube-system   kube-apiserver-calico-dual                1/1     Running   0          2m36s
kube-system   kube-controller-manager-calico-dual       1/1     Running   0          2m33s
kube-system   kube-proxy-79gb8                          1/1     Running   0          2m28s
kube-system   kube-scheduler-calico-dual                1/1     Running   0          2m36s
kube-system   storage-provisioner                       1/1     Running   0          2m25s

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl wait pod dualtest --for=condition=Ready --timeout=120s
kubectl get pod dualtest -o wide

# PodIPs reported by Kubernetes
kubectl get pod dualtest -o jsonpath='{.status.podIPs[*].ip}{"\n"}'

# Inside pod: v4/v6 addresses + routes
kubectl exec dualtest -- ip -4 addr show dev eth0
kubectl exec dualtest -- ip -6 addr show dev eth0
kubectl exec dualtest -- ip -4 route
kubectl exec dualtest -- ip -6 route
pod/dualtest condition met
NAME       READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
dualtest   1/1     Running   0          35s   10.244.109.2   calico-dual   <none>           <none>
10.244.109.2 fd11:44::4116:a95c:cd84:6d02
3: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    inet 10.244.109.2/32 scope global eth0
       valid_lft forever preferred_lft forever
3: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 state UP qlen 1000
    inet6 fd11:44::4116:a95c:cd84:6d02/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::7474:25ff:fe76:d6c0/64 scope link
       valid_lft forever preferred_lft forever
default via 169.254.1.1 dev eth0
169.254.1.1 dev eth0 scope link
fd11:44::4116:a95c:cd84:6d02 dev eth0  metric 256
fe80::/64 dev eth0  metric 256
default via fe80::ecee:eeff:feee:eeee dev eth0  metric 1024
multicast ff00::/8 dev eth0  metric 256

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get svc kubernetes -o wide
kubectl get svc kubernetes -o yaml | sed -n '1,80p'
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE     SELECTOR
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4m20s   <none>
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2025-12-08T07:39:57Z"
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "239"
  uid: fbdce024-c759-443c-83c5-6e34231da642
spec:
  clusterIP: 10.96.0.1
  clusterIPs:
  - 10.96.0.1
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ APISVC=$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')

kubectl exec dualtest -- \
  curl -k -s -o /dev/null -w "apiv4:%{http_code}\n" https://$APISVC:443/version
apiv4:200
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get svc kube-dns -n kube-system -o yaml | sed -n '1,80p'
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-12-08T07:39:58Z"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: CoreDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "512"
  uid: fce0962e-e159-4fd5-a99d-f234d269543a
spec:
  clusterIP: 10.96.0.10
  clusterIPs:
  - 10.96.0.10
  - fd00:220::c:68e6
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  - name: metrics
    port: 9153
    protocol: TCP
    targetPort: 9153
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ DNSV4=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIPs[0]}')
DNSV6=$(kubectl get svc kube-dns -n kube-system -o jsonpath='{.spec.clusterIPs[1]}')

echo "kube-dns Service IPs: V4=$DNSV4 V6=$DNSV6"
kube-dns Service IPs: V4=10.96.0.10 V6=fd00:220::c:68e6
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl exec dualtest -- \
  nslookup kubernetes.default.svc.cluster.local kube-dns.kube-system.svc.cluster.local
Server:         kube-dns.kube-system.svc.cluster.local
Address:        10.96.0.10#53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1
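
The lookup above returns only an A record because the kubernetes Service itself is SingleStack IPv4 (its spec is shown earlier); a Service keeps its family unless its ipFamilyPolicy is explicitly changed. To confirm:

kubectl get svc kubernetes -o jsonpath='{.spec.ipFamilies}{"\n"}'
# ["IPv4"]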

kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl delete deploy dual-echo --ignore-not-found
kubectl delete svc dual-echo-dual --ignore-not-found

kubectl create deploy dual-echo --image=nginx:stable
kubectl rollout status deploy/dual-echo

kubectl get pods -l app=dual-echo -o wide
deployment.apps/dual-echo created
Waiting for deployment "dual-echo" rollout to finish: 0 of 1 updated replicas are available...
deployment "dual-echo" successfully rolled out
NAME                         READY   STATUS    RESTARTS   AGE   IP             NODE          NOMINATED NODE   READINESS GATES
dual-echo-76c67c76cf-5qwkn   1/1     Running   0          48s   10.244.109.3   calico-dual   <none>           <none>
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ POD=$(kubectl get pod -l app=dual-echo -o jsonpath='{.items[0].metadata.name}')
kubectl get pod "$POD" -o jsonpath='{.status.podIPs[*].ip}{"\n"}'
10.244.109.3 fd11:44::4116:a95c:cd84:6d03
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dual-echo-dual
  labels:
    app: dual-echo-dual
spec:
  selector:
    app: dual-echo
  type: ClusterIP
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
EOF
service/dual-echo-dual created
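
Note on the ipFamilyPolicy used above: PreferDualStack assigns both families when the cluster can, but silently falls back to single-stack otherwise; RequireDualStack instead rejects the Service unless both families can be allocated. A stricter variant of the same spec would change only one line:

  ipFamilyPolicy: RequireDualStack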
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ kubectl get svc dual-echo-dual -o yaml | sed -n '1,80p'

kubectl get endpointslices.discovery.k8s.io \
  -l kubernetes.io/service-name=dual-echo-dual \
  -o yaml | sed -n '1,200p'
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"dual-echo-dual"},"name":"dual-echo-dual","namespace":"default"},"spec":{"ipFamilies":["IPv4","IPv6"],"ipFamilyPolicy":"PreferDualStack","ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"dual-echo"},"type":"ClusterIP"}}
  creationTimestamp: "2025-12-08T07:48:14Z"
  labels:
    app: dual-echo-dual
  name: dual-echo-dual
  namespace: default
  resourceVersion: "979"
  uid: 65ab9493-1b6f-4588-8007-31490411f694
spec:
  clusterIP: 10.110.83.39
  clusterIPs:
  - 10.110.83.39
  - fd00:220::3:9f8d
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: PreferDualStack
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: dual-echo
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
apiVersion: v1
items:
- addressType: IPv4
  apiVersion: discovery.k8s.io/v1
  endpoints:
  - addresses:
    - 10.244.109.3
    conditions:
      ready: true
      serving: true
      terminating: false
    nodeName: calico-dual
    targetRef:
      kind: Pod
      name: dual-echo-76c67c76cf-5qwkn
      namespace: default
      uid: b0b3bf7f-c0c2-4a61-b0cd-cb2925b351d0
  kind: EndpointSlice
  metadata:
    annotations:
      endpoints.kubernetes.io/last-change-trigger-time: "2025-12-08T07:48:14Z"
    creationTimestamp: "2025-12-08T07:48:14Z"
    generateName: dual-echo-dual-
    generation: 1
    labels:
      app: dual-echo-dual
      endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
      kubernetes.io/service-name: dual-echo-dual
    name: dual-echo-dual-97nbr
    namespace: default
    ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: true
      controller: true
      kind: Service
      name: dual-echo-dual
      uid: 65ab9493-1b6f-4588-8007-31490411f694
    resourceVersion: "981"
    uid: af1ad32d-08ba-41ee-82aa-3d2bbbfd4a94
  ports:
  - name: http
    port: 80
    protocol: TCP
- addressType: IPv6
  apiVersion: discovery.k8s.io/v1
  endpoints:
  - addresses:
    - fd11:44::4116:a95c:cd84:6d03
    conditions:
      ready: true
      serving: true
      terminating: false
    nodeName: calico-dual
    targetRef:
      kind: Pod
      name: dual-echo-76c67c76cf-5qwkn
      namespace: default
      uid: b0b3bf7f-c0c2-4a61-b0cd-cb2925b351d0
  kind: EndpointSlice
  metadata:
    annotations:
      endpoints.kubernetes.io/last-change-trigger-time: "2025-12-08T07:48:14Z"
    creationTimestamp: "2025-12-08T07:48:14Z"
    generateName: dual-echo-dual-
    generation: 1
    labels:
      app: dual-echo-dual
      endpointslice.kubernetes.io/managed-by: endpointslice-controller.k8s.io
      kubernetes.io/service-name: dual-echo-dual
    name: dual-echo-dual-tqbth
    namespace: default
    ownerReferences:
    - apiVersion: v1
      blockOwnerDeletion: true
      controller: true
      kind: Service
      name: dual-echo-dual
      uid: 65ab9493-1b6f-4588-8007-31490411f694
    resourceVersion: "982"
    uid: eed6f99c-8925-409f-b84e-f2944b530099
  ports:
  - name: http
    port: 80
    protocol: TCP
kind: List
metadata:
  resourceVersion: ""
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ V4=$(kubectl get svc dual-echo-dual -o jsonpath='{.spec.clusterIPs[0]}')
V6=$(kubectl get svc dual-echo-dual -o jsonpath='{.spec.clusterIPs[1]}')
echo "dual-echo-dual Service IPs: V4=$V4 V6=$V6"
dual-echo-dual Service IPs: V4=10.110.83.39 V6=fd00:220::3:9f8d
kartikjoshi@kartikjoshiwindows:~/minikube-ipv6-support/minikube$ # IPv4 VIP
kubectl exec dualtest -- \
  curl -s -g -o /dev/null -w "v4:%{http_code}\n" http://$V4:80/

# IPv6 VIP
kubectl exec dualtest -- \
  curl -s -g -o /dev/null -w "v6:%{http_code}\n" http://[$V6]:80/

# DNS-based (prefer v4/v6 depending on kube-proxy / DNS behaviour)
kubectl exec dualtest -- \
  curl -s -g -o /dev/null -w "dns:%{http_code}\n" http://dual-echo-dual.default.svc.cluster.local/
v4:200
v6:200
dns:200
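
To tear the test profiles down afterwards (same command as at the top of the log):

./out/minikube delete -p calico-v4
./out/minikube delete -p calico-v6
./out/minikube delete -p calico-dual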

@minikube-bot
Collaborator

Can one of the admins verify this patch?

@medyagh
Member

Can you please add a Before/After for this PR, or maybe run an example in the description?
